Bearer is a tool to help security and engineering teams protect sensitive data across their development lifecycle. As the product evolves, we want to add features while maintaining simplicity—without jeopardizing user experience. This goal can be achieved through usability testing!
What is usability testing?
Usability testing is a method for evaluating your product to see how it performs in real contexts. It helps you assess user behavior, performance, and satisfaction, and it surfaces opportunities to improve the user experience within the product.
Often, in a fast-paced company, user research ends up overlooked because it takes up time and resources. However, all the team's hard work will be wasted if you end up making something that nobody wants to use.
But how can usability testing help in the product design process? It allows you to:
- Discover if users can successfully complete tasks inside the product.
- Evaluate their performance and enjoyment to see how well the design works.
- Test and validate concepts and hypotheses.
- Uncover problems and see how well the product is solving them.
- Base design decisions and strategy on real data.
- Find solutions that are valuable and efficient to use.
How is usability testing done?
In a usability testing session, the tester performs tasks using the product's user interface. Usability testing leaves you with quantitative and qualitative metrics. Quantitative metrics produce numerical results, such as the average task success rate. Qualitative metrics are descriptive and capture insights that can be observed but not measured, such as the reasons behind an action. Both offer valuable insights, and they are even more useful when strategically combined.

Usability testing can also be remote or in-person, and moderated or unmoderated. Moderated tests are performed “live”, with the proctor and the tester present at the same time. Unmoderated tests are completed by the tester on their own, guided by a set of instructions and prompts.
One example of a testing scenario was a new iteration of our Inventory page layout. The main goal of the test was to validate the new layout by analyzing its usability, including how easily users could find essential information and whether they could figure out how to perform each available action.
Since Bearer is a fully remote company, we chose to run a remote, unmoderated usability test. Based on this, we decided to use Maze, a dedicated online remote testing tool, to set up our tasks.
In Maze, you can create task blocks with instructions by integrating with Figma and pulling in the original design prototype, and you can add follow-up questions. Testers then complete the tasks on their own time. Once they finish, Maze generates a report with the test results and relevant metrics. With this structure, we can analyze both quantitative metrics, such as misclick rate, task success, and duration, and qualitative metrics, like the perceived difficulty of each task.
How we build a usability test with Figma, Maze, and Notion
Taking our “inventory layout” example mentioned earlier, let’s walk through the process we use to define and build usability tests.
We use the following tool stack to perform a test:
Phase 1: Plan
Tool: Notion
The planning phase is important: it allows the product team to define why, what, with whom, and how the tests are performed.
Why: The design team starts by defining the context and goals of the test, making clear why we are running it and what we want to achieve with it.
What: Then, we define which metrics we will take into account to analyze the success of the design (a short sketch of how they could be aggregated follows this planning breakdown).
- Quantitative metrics
- Time spent on the tasks: Seconds and milliseconds.
- Success/failure rates: Number of successes and failures per participant.
- Effort: Number of misclicks, heat-map analysis.
- Qualitative metrics
- Perceived difficulty: Rated from 1 (very hard) to 5 (very easy) for each task.
- Overall satisfaction: Range of emotions at the end of the entire test.
How: The means by which we will perform the test. For example: We decide to run an evaluative unmoderated test and create the task list.
With whom: How many users, what type of users, etc. For example: The test will be sent to 20 users. Once that number of completed tests is reached, the results should be analyzed and a report written.
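To make the quantitative side of the plan a little more concrete, here is a minimal sketch of how those per-task metrics could be aggregated from raw session records. The record structure and field names are assumptions for illustration only; they are not Maze's actual data format.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskAttempt:
    """One tester's attempt at one task (hypothetical record, not Maze's schema)."""
    task: str
    succeeded: bool
    duration_s: float  # time spent on the task, in seconds
    misclicks: int
    difficulty: int    # perceived difficulty, 1 (very hard) to 5 (very easy)

def summarize(attempts: list[TaskAttempt]) -> dict[str, dict[str, float]]:
    """Aggregate the planned metrics per task."""
    summary: dict[str, dict[str, float]] = {}
    for task in sorted({a.task for a in attempts}):
        rows = [a for a in attempts if a.task == task]
        summary[task] = {
            "success_rate": sum(a.succeeded for a in rows) / len(rows),
            "avg_duration_s": mean(a.duration_s for a in rows),
            "avg_misclicks": mean(a.misclicks for a in rows),
            "avg_difficulty": mean(a.difficulty for a in rows),
        }
    return summary

# Example with made-up data from two testers:
attempts = [
    TaskAttempt("find_inventory_item", True, 42.0, 1, 5),
    TaskAttempt("find_inventory_item", False, 95.5, 4, 2),
]
print(summarize(attempts))
```

In practice, Maze calculates these aggregates for you; the sketch is only meant to show what each metric in the plan boils down to.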
Phase 2: Set up
Tool: Figma
After planning, it was time to set up the test. In Figma, I created user flows for each task and a high-fidelity prototype. Since the test was unmoderated and I wouldn't be in direct contact with the users, I wanted the experience to be as close to reality as possible.
Tool: Maze
Then, I imported the Figma prototype into Maze to create missions and blocks based on the planned tasks. Maze provides a link for the test, which I shared with enough testers to get a comprehensive understanding of user behavior patterns.
Phase 3: Analyze results
Tool: Maze
After reaching the goal of 20 testers, I accessed the Maze report and analyzed user behavior, patterns, and results. The report for this test showed both good and bad results. One task had an average success rate of 85%, with low perceived difficulty and very little time required to finish it. For another, a third of users didn't identify a button, leading to a high incidence of mistakes and very poor usability, with most users considering it a complex task.
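As a rough illustration of how those report numbers feed into decisions, here is a small sketch that flags tasks falling below a target success rate or with a poor average perceived difficulty. The thresholds are hypothetical, building on the per-task summary sketched in the planning phase; they are not values Maze prescribes.

```python
# Flag tasks that need design attention, based on the per-task summary from the
# earlier sketch. The thresholds are illustrative, not Maze defaults.
TARGET_SUCCESS_RATE = 0.8  # a task with an 85% success rate clears this bar
MIN_AVG_DIFFICULTY = 3.0   # on the 1 (very hard) to 5 (very easy) scale,
                           # an average of 3 or below reads as "hard"

def tasks_needing_rework(summary: dict[str, dict[str, float]]) -> list[str]:
    """Return the tasks whose metrics suggest the design needs another iteration."""
    return [
        task for task, metrics in summary.items()
        if metrics["success_rate"] < TARGET_SUCCESS_RATE
        or metrics["avg_difficulty"] <= MIN_AVG_DIFFICULTY
    ]
```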
Tool: Notion
After reviewing the Maze report, it was time to write up the results in Notion and reflect on the learnings. Based on these results, I could report opportunities for improvement, as well as validate the parts of the design that were successful and didn't need significant changes. These findings were later translated into actionable items that were applied to the final design.
Building fast, and testing along the way
You should always keep moving forward with a product, but it’s important to incorporate checks into the process to make sure your customers can make the most out of it. Usability testing, particularly at the design level, is a great way to do this. While testing may add time to this stage in the process, it can save future resources by identifying usability issues before your engineering team builds out the features.