Usability can have a serious impact on society. Poor usability in a voting ballot, for example, could swing an election. Poor usability on a road could cost lives. Usability impacts everyone, every day, at every level of society. This is especially true in the digital era. The more complex the products we build are, the more important usability becomes.
Usability testing isn’t an end but a means: it helps you reach an informed decision about whatever you’re testing. In a business context, metrics are key to calculating ROI. They’ll help you find problems you might have missed, like inefficiencies or redundancies in your user flow. They’ll help you show whether a change would actually improve your current product. And most of all, they’ll help you determine the costs and benefits of what you build before you start spending any money on it.
Still, I often hear doubts about the need for usability testing in a business plan. Does it really matter? Won’t it just be expensive? Who needs metrics when you can go with your gut feeling? In reality, most of these doubts are little more than myths. Here are five of them, and why they’re wrong.
Usability Testing Myth #1: It’s too expensive
Usability testing is expensive in two cases: when you hire a company to do everything for you or when you need to work with a large sample size and combine complex methods. Conducting usability tests that include many different groups and require collecting both qualitative and quantitative data can cost you. That much is true.
But usability testing doesn’t have to be done on a massive scale and with a lot of complexity every time. Using web-based solutions can give you a wealth of data about usability at a fraction of the cost. That’s what we do here at Mooncascade. Tools like Google Analytics and Mixpanel are part of nearly every solution we build for our clients. Adding events usually takes less than a minute per feature. The whole process takes about a day and costs a few hundred euros. Compared to testing projects that can take up to a month to complete, the savings and increase in efficiency here are enormous.
Another inexpensive option is in-house analytics. If your product includes a feature that requires users to log in, for example, implementing analytics can involve adding just one extra line to your code. Don’t forget to think outside the box, too. During our work with Coop, an Estonian bank and grocery store, we simply went to the company’s grocery store in person, showed the product to shoppers, gathered feedback, and went back to the office. It took a couple of hours and a few team members, and voilà! We had our usability metrics.
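To make the “one extra line” point concrete, here is a minimal sketch. The `track` helper, event name, and in-memory counter are illustrative stand-ins for a real analytics SDK call (for example, Mixpanel’s `track()`), not actual Mooncascade code:

```python
from collections import Counter

# Tiny in-memory event tracker standing in for a real analytics SDK.
# All names here are illustrative.
events = Counter()

def track(event_name):
    """Record one occurrence of an analytics event."""
    events[event_name] += 1

def log_in(username, password):
    # ...existing authentication logic would go here...
    track("user_logged_in")  # the single extra line of instrumentation
    return True

log_in("alice", "secret1")
log_in("bob", "secret2")
print(events["user_logged_in"])  # → 2
```

With a real SDK, the only change to the feature itself is still that single `track(...)` call; everything else lives in the analytics layer.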
Usability Testing Myth #2: Usability metrics don’t help us understand causes
If we’re talking about measuring quantitative data, this simply isn’t true: usability metrics do help us understand causes. And they do a great job of it, too. For example, if you place data points at each step in a user flow, you can use the data gathered to locate drop-offs along the way. Many analytics tools offer plug-in metrics (e.g., asking a user for their email) that make this process even easier. Another possibility is to have your website generate error codes, which will allow you to pinpoint any technical issue leading to user drop-offs with serious precision.
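The drop-off analysis described above can be sketched in a few lines of Python. The step names and counts below are invented for illustration; in practice they would come from your analytics tool:

```python
# Hypothetical event counts collected at each step of an onboarding flow.
funnel = [
    ("opened_app", 1000),
    ("entered_email", 720),
    ("verified_email", 690),
    ("completed_profile", 310),
]

def drop_offs(steps):
    """Return the share of users lost between each pair of consecutive steps."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rates.append((f"{prev_name} -> {name}", round(1 - n / prev_n, 3)))
    return rates

for transition, rate in drop_offs(funnel):
    print(transition, rate)
```

In this invented dataset, the largest drop-off (about 55%) sits between email verification and profile completion, which is exactly the kind of signal that tells you where to look first, before qualitative testing tells you why.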
This is exactly how Mooncascade conducted usability testing while helping Monese develop their onboarding flow. Data points allowed us to quickly understand what we were facing and where it was taking place. Combined with more qualitative testing like user observation, this also helped us see why people weren’t moving forward. If, for example, users found themselves staring at a particular screen for thirty seconds then abandoning the process, we were able to confirm what the numbers had suggested: this part of the user flow was the cause of the problem and needed to be improved.
Usability Testing Myth #3: You can just trust your gut
Gut feelings might be right for you, but it’s important to remember that you aren’t your users. Building a product means opening it up to different age groups, education levels, and languages. When we were working with Monese, who were developing a product for the UK market, we quickly found that the language they were using wasn’t always clear to their user base, which included quite a few people from non-English-speaking countries.
Even if you’re working in a context with a homogenous target group, gut feelings vary so widely that they simply can’t be used as a reliable marker for driving your decisions. You may have heard of Richard LaPiere’s famous “Attitudes vs. Actions” study: in 1930s America, a Chinese couple visited over 250 hotels and restaurants and were turned away only once. LaPiere then sent a survey to each business visited, asking whether they would serve Chinese customers in their establishment. 92% answered “no”.
During our work with the postal company Omniva, we designed an app to be used by everyone working there. The user research we carried out during development proved that our gut feelings were often wrong. What we had assumed would count as good performance turned out to differ widely between roles. For couriers, minutes matter. For workers who sort packages, it’s milliseconds. We were even caught off guard by our expectations for design: we had to magnify all of the app’s fonts so that the post office workers, who were mostly older people, could actually use it!
When you build products, you’re looking for direct, measurable results. Gut feelings aren’t just vague and unreliable, they can also be spectacularly wrong. Yet another reason why usability metrics and an appropriate usability testing regime are essential.
Usability Testing Myth #4: Metrics don’t apply to new products
Not only do metrics apply to new products—they’re even more important in this context. You don’t want to go live blindly. Right before launching Monese’s app, we had a tough discussion about which features we should leave out or add later on in the process. We considered cutting metrics from the initial release, as they wouldn’t deliver immediate value to the client.
In the end, we decided to keep them and go forward with usability testing. It was the right thing to do. When testing the Android version of the app, we pinpointed issues users were having with the onboarding flow, which was originally quite complex. We were able to fix this early on, which spared us the cost of shipping the app to everyone only to have thousands of users abandon it partway through without generating any revenue.
Despite the tight schedule we were on, taking a little extra time to conduct some usability testing helped us avoid plenty of cost-inducing headaches down the road and ensure that Monese’s launch was as brilliant as it could be.
Usability Testing Myth #5: Metrics aren’t understood or appreciated by management
This might be the easiest myth to debunk: in many ways, metrics are the one language management understands best! You can’t defend an idea with a gut feeling. And you can’t necessarily do it with expertise either. This is something we encounter with clients all the time. But metrics strengthen projects and proposals through measurable data. Proof that can be quantified and monetized isn’t just convincing—it’s ideal.
I witnessed this recently, as Mooncascade was discussing how much it should invest in an updated version of its webpage. One of our product managers built a case based on data from usability testing, which compared the current version to a new version and showed how the improved website would generate significantly more value for us. That’s exactly what management wanted to hear. So our new website is in the works. You can read more about how we did it in our blog post How To Improve Your Data-driven Decision Making.
Combine usability metrics to create value
Usability testing is most powerful when used wisely. There’s no need to pour money into huge projects when smart, DIY testing will work just as well (if not better). And remember that metrics are most useful for generating value when combined. Don’t focus only on qualitative data, which can lead you astray, or quantitative data, which can be limited when interpreted alone. Combine both to understand what’s happening, where it’s happening, and why. Before you know it, you’ll be building a great product, with even greater value.
WANT TO IMPROVE THE USABILITY OF YOUR PRODUCT?
If you’d like to learn more about how to improve the usability of your product, Mooncascade is here to help. Practicing design thinking, focusing on user experience, and integrating a data-driven approach into product development are at the core of what we do, which makes us an ideal partner for it.