The idea behind task-based usability testing is that an application’s user experience is composed of the steps along the user’s journey, each of which must be optimized for simplicity and ease of use to guide the user toward their end goal.

Each task, then, while contributing to a cohesive whole, is also a unique opportunity to create an intuitive and seamless interaction for the user. It is in finding where we fail to do this that we are able to improve our websites and applications.

So if tasks are the building blocks of usability testing, is there a way to think quantitatively about the individual usability of the tasks we ask our testers to complete? Qualitative feedback identifying problem spots is an invaluable (and the primary) return of user testing, but it does not allow us to compare general usability across tasks and see the relative weight users assign to the problems (or ease of use) they faced in each separate task.

With the implementation of the System Usability Scale (SUS), we complemented qualitative UX feedback with a way to measure and quantify overall system satisfaction and usability; but even a short 10-item questionnaire like SUS could quickly become burdensome for testers when applied repeatedly after each task.

[Image: Example comparative statistics]

Measurable metrics like number of clicks or time taken per task are useful for gauging the effectiveness and simplicity of task designs, and they are built into the bones of usability testing anyway; but they are not comprehensive, and are better suited for extrapolation and setting targets to hit.


The Single Ease Question

A more broadly focused method, which does not pile significant amounts of time, effort, or complexity onto the tester, is the Single Ease Question, or SEQ.

Like SUS, the Single Ease Question uses a Likert-style response scale, but the similarities stop there. As its name implies, SEQ is just one question: “How difficult or easy did you find the task?” And the response scale has 7 points, not 5.

This adds room for more nuance and a greater diversity of responses, while still preserving the one-question-only simplicity of the SEQ.

The Single Ease Question has been found to be just as effective a measure as other, longer task usability scales, and also correlates (though not especially strongly) with metrics like time taken and completion rate.

In addition to its usefulness as a quantification tool, the SEQ can provide important diagnostic information with the inclusion of one more query: “Why?” MeasuringU recommends asking testers for the reason behind their rating following scores of 5 or less (on a scale of 1 to 7) to get to the root of sub-par performances. Though this doubles the length of an admittedly short questionnaire, it adds critical value by tying feedback causally to specific problems that you can then act on to improve your website.
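To make the scoring and follow-up rule concrete, here is a minimal sketch of how SEQ responses might be aggregated per task. The task names and scores are hypothetical illustrations, and the 5-or-less cutoff for triggering the “Why?” follow-up is the MeasuringU recommendation described above.

```python
from statistics import mean

# Hypothetical SEQ responses (1 = very difficult, 7 = very easy), keyed by task.
# In a real study these would come from your test sessions.
responses = {
    "Sign up for an account": [7, 6, 5, 7, 6],
    "Find the pricing page":  [4, 3, 5, 4, 6],
}

FOLLOW_UP_THRESHOLD = 5  # scores of 5 or less prompt a "Why?" follow-up

for task, scores in responses.items():
    avg = mean(scores)
    # Count how many testers should be asked the follow-up question
    follow_ups = sum(1 for s in scores if s <= FOLLOW_UP_THRESHOLD)
    print(f"{task}: mean SEQ = {avg:.1f}, follow-ups triggered = {follow_ups}")
```

A per-task mean like this is what lets you compare relative ease across the steps of a user journey, while the flagged responses tell you where to look for the qualitative “why.”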


As part of our effort to provide a full range of both qualitative and quantitative perspectives on UX, TryMyUI has added the SEQ to our toolbox to help you understand the usability not only of your website as a whole, but also of the individual steps on the user’s pathway through it.