User experience architecture

June 23, 2010

Usability findings: defects or risks?

Insight and advice

When we run a usability test or a heuristic evaluation we create two types of value: insight and advice.

Insight is packaged as a set of findings. Good insight comes from findings that are accurate, clear and at the appropriate level of abstraction. Great insight requires an eye for pervasive patterns of design error: mining the detail to extract a few simple, powerful themes.

Advice comes as recommendations. Good recommendations are clear, pragmatic and actionable. Great recommendations also reflect the actual priorities of the business.

Modelling risk

The interesting part is getting from great insight to great advice. One approach is to borrow a model from Human Reliability Analysis (HRA), a discipline concerned with identifying and assessing risk.

Here’s a useful model: r = p × i, where r is the risk associated with some factor, p is the probability of an incident and i is the impact of that incident. For example, we can use this to compare the risk to society of a nuclear meltdown (low p, high i) with the risk from traffic accidents (high p, low i). The results are a decent starting point for making investment decisions on programmes to prevent, detect and recover from incidents.
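As a toy sketch, the model is just a product. The probability and impact values below are invented purely for illustration; the point is the shape of the comparison, not the numbers.

```python
def risk(p, i):
    """Risk of a factor: probability of an incident times its impact."""
    return p * i

# Invented numbers, for illustration only: a rare, catastrophic event
# can out-score a common, low-impact one.
meltdown = risk(p=0.0001, i=1_000_000)  # low p, high i
traffic = risk(p=0.3, i=40)             # high p, low i

print(meltdown > traffic)  # here, the rare event carries the larger risk
```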

Findings stated as risks

The findings of a usability evaluation are actually predictions. In a test, we investigate the behaviour and attitudes of a sample to infer the behaviour of a population of users. In an inspection, we role-play a sample for the same reason. Our predictions are, in effect, statements of risk.

  1. [Based on the behaviour of our test participants we predict that] sophisticated language will deter a few users from using the menus to proceed beyond the home page.
  2. [Based on the behaviour of our test participants we predict that] misleading visual affordances will mask the interactivity of the product configuration controls for the majority of users.
  3. [Based on the behaviour of our test participants we predict that] due to fixed font sizes, a few users will be unable to read the privacy statement.

Each of these findings is a prediction grounded in data. It estimates a probability in terms of a number of users. It models impact in terms of what the defect prevents the user from achieving. Of course, there are other ways of expressing these factors: p could be an error rate or the frequency of the defect within the design; i could model consequences such as user attrition, lost revenue, productivity leakage, hazards or compliance issues. Choosing the right approach can make for an interesting and enlightening conversation with your client. Alternatively, High (3), Medium (2) and Low (1) may be all you need.

Using the model

So, to assess the risk of the three findings above:

1. risk = Low (few users) × High (blocked at the home page) = 1 × 3 = 3

2. risk = Medium (many users) × Medium (can’t configure a product) = 2 × 2 = 4

3. risk = Low (few users) × Low (privacy statement) = 1 × 1 = 1

On a scale of 1 (p = Low, i = Low) to 9 (p = High, i = High), fixing the configurator affordances is our highest priority. Interestingly, on this scheme it is still only a moderate risk. As with any calculation, if the result jars with your intuition, check your assumptions: your initial assessment of p and i may be out.

Now that we have a risk for each finding, it’s straightforward to prioritise the recommendations. If there’s fixed development capacity for usability issues, select findings to tackle by ranking the risk scores. Otherwise, make an investment decision on whether to address each finding based on its absolute risk score.
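That prioritisation step can be sketched in a few lines. The finding labels below are abbreviations of the three examples above, and the High/Medium/Low values follow the 3/2/1 scale already described.

```python
HIGH, MEDIUM, LOW = 3, 2, 1

# (finding, p, i) for each of the three example findings.
findings = [
    ("sophisticated menu language", LOW, HIGH),
    ("misleading configurator affordances", MEDIUM, MEDIUM),
    ("fixed-font privacy statement", LOW, LOW),
]

# Score each finding with r = p * i, then rank highest risk first.
ranked = sorted(
    ((name, p * i) for name, p, i in findings),
    key=lambda scored: scored[1],
    reverse=True,
)

for name, score in ranked:
    print(f"risk {score}: {name}")
```

With fixed capacity, you work down this ranked list; otherwise, you set an absolute risk threshold and fix everything above it.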

You might also want to factor in the cost of the fix – but that’s another calculation for another day.
