The road to a successful digital product isn’t smooth. It’s a journey full of dips, bumps, ramps, fist-shaking, and the occasional deadly curve. Our process is to put together a team with overlapping skill sets tailored to the business problem at hand. However, in big, complex projects, the right team may only get you 80% of the way there.

That other 20%? Testing.

Types of research

When most people think about user testing, one of two things typically goes through their minds (sometimes both):

  1. That sounds complicated.
  2. That sounds expensive.

These are usually the primary reasons clients are reluctant to move forward with testing during a project. Get ready for the revelation: testing isn’t complicated, and while it CAN be quite expensive if we need to run a comprehensive set of tests, most of the time it can be accomplished without spending a fortune.

We have a variety of methods and techniques in our arsenal, ranging from quick run-and-gun user reaction tests and remote task-based usability tests, all the way to moderated in-person tests. We tailor the methodology to the project, and we typically lean on the faster tests because they keep pace with our iterative design cycles.

One of the misconceptions about usability testing is that you must have stringent requirements for recruiting. If you want a bulletproof scientific test, that has an element of truth to it. But you will still get an amazing amount of actionable data from testing nearly anyone who might come into contact with your product. When doing quick tests, we like to select 5–10 users across a variety of demographics and skill levels and run them through an unmoderated test that records their screen, mouse movements, and voice. We’ve never run a simple test like this and failed to gather specific, actionable data to make the product better.

Types of things to test

Recently, we designed a new booking interface for a client in the travel industry. Since we were breaking new ground from an interface perspective, we wanted to run it by a handful of users to test assumptions and refine the design.

The first test we ran was a very inexpensive click test that asked participants a simple question: “If you were going to start booking a flight, what would you click on?” The test logged which element on the page each participant clicked, as well as how quickly they clicked it. This kind of test tells us whether an interface makes sense, and how much deliberation it requires before a participant makes a choice.
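To make the mechanics concrete, here’s a minimal sketch of the kind of instrumentation a first-click test relies on. This is illustrative only: we used an off-the-shelf testing tool rather than custom code, and the collection endpoint and payload shape below are hypothetical.

```typescript
// Minimal first-click instrumentation sketch (illustrative only).
// The /api/click-test-results endpoint and payload are hypothetical.

const testStart = performance.now();
let recorded = false;

document.addEventListener(
  "click",
  (event) => {
    if (recorded) return; // only the first click matters for this test
    recorded = true;

    const target = event.target as HTMLElement;
    const result = {
      // Which element the participant chose...
      element: target.tagName,
      id: target.id || null,
      text: target.textContent?.trim().slice(0, 80) ?? null,
      // ...and how long they deliberated before choosing it.
      msToFirstClick: Math.round(performance.now() - testStart),
    };

    // Send the result to a (hypothetical) study endpoint.
    void fetch("/api/click-test-results", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(result),
    });
  },
  { capture: true }
);
```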

Next, we built an interactive prototype of the interface and wrote several tasks to guide participants through a typical scenario. We then ran a remote usability test to identify key areas for iteration. The results surfaced opportunities to refine the design, which we validated through two additional rounds of testing, with slight revisions between each one.

Through these two inexpensive tests, we were able to validate our design decisions and improve the interface significantly.

Weaving strategic testing points into the design process with our clients provides much-needed validation, ensuring that we deliver solutions to their problems with confidence.