At TryMyUI, we do a lot of comparative usability testing – from our monthly UX Wars series to our CompUX client reports. No website exists in a vacuum, and seeing how yours compares to your competitors’ is critical for making important roadmap decisions.

Where does your competitor’s website or app hold the edge? What are they doing right that you can learn from? And where are the strong points in your own design? Learning the answers to these questions will give you a strong grounding for deciding where to take your product and how to market it.

Here are the top 5 things we tell customers who are looking to run a comparative usability study.

 

1. Keep tasks the same

One of the key tricks is to keep your tasks as similar as possible so that the results are directly comparable. If you can, choose a scenario and set of tasks that are exactly the same, including order and word choice. This may require you to get creative; make sure to frame your tasks in a way that is equally applicable to both sites, and use words that aren’t found on either so you don’t give an advantage to one or the other.

However, two sites, even competitors, are rarely identical, and it’s likely you’ll have to make some accommodations. Sometimes similar sites will be designed with the same functions in a different order, or with one or two central functions that are sharply different.

The key is to design a genuine, true-to-life user journey for each site that will return relevant insights, while also ensuring that your test designs are close enough to allow side-by-side comparison.


Read more: Writing usability testing tasks


 

2. Different testers for different sites

Typically, you’ll want to use different testers for each site, rather than having the same people test both.

Since both sites are offering a competing product, service, or experience, the design and structure of the first site will invariably affect testers’ perceptions and expectations of the second. People create schemas for how different functions should look, feel, and work, and once they have seen one site’s version, they are more likely to have trouble with versions that differ.

This is the same reason we recommend using different testers for longitudinal research on a single site – once people have learned a system, it colors their subsequent experiences, and their test results will not reflect a typical user’s journey.

 

3. Recognize the important issues

Not all usability problems are created equal. Understanding the weight of various issues is important to seeing how two sites really compare. When thinking about what matters most, these are some questions to consider:

How serious is the problem itself? Does it completely obstruct the function at hand, or only slow it down?

How crucial is the affected function? Is it auxiliary to the user experience, or fundamental?

How did users respond to the problem? Were they annoyed, frustrated? Who did they blame for the problem?

The more successful site is not the one with the smallest tally of issues, but the one that better enables users to achieve their end goals. For example, if your website centers on a search function and the search is unusable, no amount of UX brownie points from the menu layout or the store locator can make up for it (check out the RateBeer vs BeerAdvocate UX Wars issue to see an example of this).
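The three questions above can be folded into a simple weighted score. The sketch below is purely illustrative, not a standard rubric: the 1–3 scales and the multiplicative weighting are assumptions made for the example, and real studies should tune these to their own context.

```python
def issue_weight(obstruction: int, criticality: int) -> int:
    """Weight one observed usability issue.

    obstruction: 1 (merely slows the task) .. 3 (fully blocks it)
    criticality: 1 (auxiliary function) .. 3 (core to the site's purpose)

    Multiplying the two means a blocking issue on a core function (9)
    outweighs several minor slowdowns combined.
    """
    return obstruction * criticality

# Hypothetical data: site A has fewer issues, but one breaks a core function.
issues_site_a = [(3, 3), (1, 1)]            # broken core search + minor menu quirk
issues_site_b = [(1, 2), (1, 2), (1, 2)]    # three small slowdowns

score_a = sum(issue_weight(o, c) for o, c in issues_site_a)  # 9 + 1 = 10
score_b = sum(issue_weight(o, c) for o, c in issues_site_b)  # 2 + 2 + 2 = 6
```

Note that site A scores worse despite having fewer issues, which is exactly the point: raw issue counts can mislead when the issues differ in weight.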

 

4. Make quantitative comparisons

Measuring the user experience in quantifiable terms is a great way to take an objective look at comparative usability.

The bulk of your insights will come from watching users struggle with usability issues firsthand, but looking at the results through a quantitative lens is very important for grounding and contextualizing, as well as eliminating personal biases.

You may think you’re objective, but it’s easy to subconsciously minimize the issues your own site has while focusing heavily on the problems of your competitor’s. Widely used quantification scales like the System Usability Scale (SUS) and the Single Ease Question (SEQ) are great tools for taking a more clear-eyed look at the results, and they also allow direct side-by-side comparison between system and system, task and task.
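To make the SUS comparison concrete, here is a minimal scoring sketch using the standard published formula: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5 to give a 0–100 score. The helper and sample data are our own, but the arithmetic is the standard SUS calculation.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    responses[0] is item 1, responses[9] is item 10. Odd items score
    (response - 1), even items score (5 - response); the total is
    scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Compare two sites by averaging each tester group's scores.
site_a_testers = [[4, 2, 4, 1, 5, 2, 4, 2, 5, 1]]
site_b_testers = [[3, 3, 3, 3, 3, 3, 3, 3, 3, 3]]
avg_a = sum(sus_score(t) for t in site_a_testers) / len(site_a_testers)
avg_b = sum(sus_score(t) for t in site_b_testers) / len(site_b_testers)
```

Averaging per-tester scores like this gives each site a single number you can put side by side, which is exactly what makes SUS useful in a comparative study.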

 

5. Go below the surface

Not every usability problem will look like a problem. Some issues are subtle enough that the user doesn’t notice that their experience has suffered. This may occur when users shoulder the blame for mistakes themselves, and therefore don’t say anything about them.

Other times it’s not that there’s a problem, but rather that there’s simply room for improvement – an observation that’s much easier to detect in a comparative usability study. Keep an eye out for spots where the user experience is merely alright. Turning those moments into stellar experiences is key to creating a successful website that people will want to return to.


Read more: It’s the little things in design


 

There’s always something your competitor is doing right that you can learn from; those insights are some of the most valuable takeaways from any comparative usability test.