Let’s assume we’re about to embark on a user experience study and want to solicit the opinions of a handful of users, be it for a moderated or unmoderated study. How should we decide which users to pick and listen to? Would any person off the street work, or are there key criteria we should pay attention to?
First, it would seem that the ideal user would be someone representative of your target demographics and naturally inclined to perform the tasks you want tested. For example, in studying how someone would go about booking a vacation, it would be great if the tester were actually looking to book a vacation, so that the situation was real rather than abstract.
We’ve found that how much this matters depends on the type of test. When testing consumer sites, for example, shopping for typical goods or booking a vacation home, we found that people project themselves into the situation quite easily. One person was interested in accommodations that allowed pets, and another immediately looked for date flexibility to accommodate cost constraints, even though neither was explicitly specified in the tasks. But we expect the results to vary based on how easily the user can relate. If the tasks were related to finding answers to questions about setting up a home wireless network, for instance, I imagine it would be difficult for the average user to imagine a relevant set of questions.
Another concern is that the person not be too aware that they’re part of the study, because a certain Heisenberg Uncertainty Principle applies to user studies as well: the more aware the user is of the study (and hence, the less natural the setting), the more it seems like a test and the less relevant the results. We believe that unmoderated, remote usability testing has the advantage over in-lab user groups on this particular point.
What we’ve found most important though, is to have users that 1) can stay on task, 2) are expressive, and 3) are not afraid of being critical.
On the first point, a surprising number of users, aware that their feedback is requested, start acting as site marketers, describing the site and its value proposition rather than just being users and staying on task. While some helpful information may be derived from this, what we really want to know is how the users accomplish the indicated tasks.
A second issue is that many users just click through links without expressing themselves. Why did they click on one link rather than another? What were they thinking? The key learning from in-depth user studies is understanding the “why” (since we can get the “what” quite easily from tracking data). While we may not accept the user’s reasoning as typical, it’s important that we have the data; so it’s essential that the users remove the filter between their brain and their mouth (yes, the filter that we’re working so hard to instill in our children).
Finally, a common issue associated with user studies is the natural human desire to please. Most of us grew up trained to be polite and non-critical, which though admirable in civic scenarios, is a detriment in usability studies. So does that mean we should only take users from New York and New Jersey (you want me to register first, fuhgetaboutit)? Actually, no, it turns out that a lot of people will be expressive after being properly trained, although it’s not quite as simple as one would think.
At TryMyUI, we coach and qualify all of our testers, then have them take a standard test, and provide them with critiques. Those that pass are then qualified to take actual usability tests. Even with all this, we end up passing less than 50% of our would-be testers.
See what kind of feedback you can get from our qualified tester pool by running a free trial test: