User experience architecture

July 3, 2009

User testing – a foundation recipe


Improvisation from basics

Every good cook has some treasured foundation recipes: a simple muffin mix to which she can add nuts, chocolate or spices; perhaps a tomato-and-onion-based soup to which she can throw in seasonal vegetables, pasta or chopped ham; maybe a spicy curry base that works well with prawns, chicken or vegetables.

To improvise in the kitchen, first master the basics, then understand when each variation is appropriate. For a white sauce, add parsley to accompany fish. Add mustard for boiled bacon or cheese for savoury pancakes. No onions? Chop a scallion. Leftover tarragon? Chop it up; chuck it in. Last night’s salsa? Think again!

Experienced usability practitioners follow a similar approach in designing a usability test. It’s applied science; observation and analysis are fundamental. However, depending on goals and constraints, we can look for many things, observe in different ways and choose from a wide range of analytical techniques. As with cooking, there’s a foundation recipe and a wide range of variations.

User testing – the foundation recipe

Here’s a seven-step recipe that covers most types of testing. The two activities in parentheses are not strictly part of the method; they do, however, reduce risk and ensure that you learn from your experience.

1. Design study
2. Recruit participants
3. Prepare artefacts
(Pilot)
4. Observe, Measure, Ask
5. Analyse data
6. Report results
7. Brief client
(Project debrief)

You can expect to have some activity for each step. However, the nature and scope of that activity will vary according to the needs of the client and the culture of the project. Consider a test to assess the safety of a remotely-controlled radiography device. You might plan for hypothesis-testing (design study) using a large sample size (recruit participants) to record error rates (measure) for statistical analysis (analyse data).

The report (report results) might become a formal project deliverable while a handover meeting (brief client) would be essential for a mixed audience of technical, business and medical specialists. It’s the equivalent of high tea: muffins with chopped dates, walnuts and cinnamon.

For a small-scale “in-flight” study, the model is the same but the activities are smaller and simpler. A formative research design (design study) uses a small sample (recruit participants) to acquire data (observe, ask) for qualitative analysis (analyse data). The results are presented in a PowerPoint deck (report results) and reviewed by the design team and project manager (brief client). This situation is more like a simple dusting of caster sugar – good rather than fancy.
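If it helps to see the shape of the recipe at a glance, here it is as a small Python sketch. The structure and names are mine, purely illustrative, not part of any standard toolkit:

```python
# The seven-step foundation recipe, with the two optional
# risk-reducing activities marked as such.
RECIPE = [
    ("Design study", False),
    ("Recruit participants", False),
    ("Prepare artefacts", False),
    ("Pilot", True),           # optional, but reduces risk
    ("Observe / Measure / Ask", False),
    ("Analyse data", False),
    ("Report results", False),
    ("Brief client", False),
    ("Project debrief", True)  # optional, but captures lessons learned
]

def print_plan(include_optional=True):
    """Print the study plan, numbering only the core steps."""
    step = 0
    for activity, optional in RECIPE:
        if optional:
            if include_optional:
                print(f"   ({activity})")
        else:
            step += 1
            print(f"{step}. {activity}")

print_plan()
```

The same list drives both the formal radiography study and the small in-flight one; only the scale of each activity changes.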

Variations

Here are the nuts, raisins and chocolate chips to add to the basic recipe.

1. Design study Summative, Formative, Benchmark, Competitive, Comparative

User-driven, “Chauffeured”
Open-ended, Scripted

2. Recruit participants Quota sample, Stratified sample, Opportunity sample

Recruit directly, Use an agency

Volunteers, Incentives

3. Prepare artefacts Paper, Static, PowerPoint, Axure, etc., Wizard of Oz, Live code
4. Observe Direct, Indirect (video), Remote (e.g. TechSmith)

From a control room, Side-by-side

In a lab, In an office, In the field

4. Measure Count, Time, Code, Checklist
4. Ask Active, Passive
Interrupt protocol, Debrief protocol, Before-and-after protocol
5. Analyse data Quantitative, Qualitative

Specific observations, Generalised issues

Descriptive, Analytical

Business impact oriented, Solution feature oriented

6. Report results Document, PowerPoint, Annotated video, Verbal

Formal, Informal, Standardised

7. Brief client Briefing, Review, Action-planning
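One way to keep this menu of variations in view while planning is to write it down and check each choice against it. A rough Python sketch, with the option lists abbreviated and all names illustrative:

```python
# Menu of variations per step (abbreviated from the lists above).
VARIATIONS = {
    "design study": {"summative", "formative", "benchmark", "competitive", "comparative"},
    "recruit participants": {"quota sample", "stratified sample", "opportunity sample"},
    "prepare artefacts": {"paper", "powerpoint", "axure", "wizard of oz", "live code"},
    "observe": {"direct", "indirect", "remote"},
    "measure": {"count", "time", "code", "checklist"},
    "ask": {"active", "passive"},
    "analyse data": {"quantitative", "qualitative"},
    "report results": {"document", "powerpoint", "annotated video", "verbal"},
    "brief client": {"briefing", "review", "action-planning"},
}

def check_plan(plan):
    """Return a list of (step, choice) pairs that are not on the menu."""
    problems = []
    for step, choice in plan.items():
        if choice not in VARIATIONS.get(step, set()):
            problems.append((step, choice))
    return problems

# A small formative "in-flight" study:
plan = {"design study": "formative", "recruit participants": "opportunity sample",
        "prepare artefacts": "paper", "analyse data": "qualitative"}
print(check_plan(plan))  # → [] — every choice is on the menu
```

The point is not the code but the discipline: every step gets an explicit choice, and an odd combination stands out early.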

You can read more about these techniques in books such as A Practical Guide to Usability Testing (Dumas and Redish) or Human-Computer Interaction (Preece et al.).

Checklist

The success of a user test is largely determined by the quality of the thinking you do before you book a lab or approach a recruiter. Here’s a checklist that covers the main issues. Use it as the basis of a workshop or planning session before you start on design and logistics.

1. Design study
  • What do you want to find out?
    • Summative – is it good enough?
    • Formative – how could it be improved?
    • Benchmark – how good is it now?
    • Competitive – how does it compare to competitors?
    • Comparative – which alternative works best?
  • Who is your target audience?
  • What tasks do you want to test?
  • Who will “drive” – you or your participants?
  • Is it open-ended or does it need to follow a pre-defined path through the prototype?
2. Recruit participants
  • How are you going to find the people you need?
  • What incentives will you offer them?
  • How are you going to get them in the right place at the right time?
  • How long do you need them for?
3. Prepare artefacts
  • In what form will you show the design to the participants?
  • How interactive does it need to be?
  • How much ground does it need to cover?
  • How high fidelity should it be?
4. Observe
  • What events and outcomes are you looking for?
  • How will you record them?
  • How many observers will you use?
  • How visible should you be?
  • How involved should you be?
  • What balance are you seeking between recording expected events and noticing surprises?
  • How will you ensure that observation does not distort the data?
  • What evidence will you need?
4. Measure
  • What events and outcomes do you want to measure?
  • How will you log the data you need?
  • How will you ensure that the measurement process does not distort the data?
4. Ask
  • What attitudes and insights do you need to capture?
  • When will you capture this information? During a task? After each task? At the end of the study?
  • How will you ask the question? In person, on a form, through the design itself?
  • How will you calibrate this information? Do you need to capture an opinion before each task?
  • How will you record this information?
  • How will you ensure that asking questions does not distort observations and measurements?
5. Analyse data
  • What is the right blend of qualitative, quantitative and video?
  • What’s the analytical focus: the problems; the causes; the impact; or recommendations?
  • What level of rigour is appropriate and affordable?
6. Report results
  • Who is going to read it? What do they need to know?
  • How long and formal does it need to be?
7. Brief client
  • How do we turn the study into a pragmatic, actionable plan?
  • How do we get commitment to change?

As in the kitchen, get the basics right but be prepared to improvise the detail. That way you’re still ready even when the store cupboard doesn’t hold exactly the right method.


August 31, 2008

Usability and the happy shopper

It’s a cliché of our discipline: a usable e-commerce site generates more sales. Web analytics show measurable differences in consumer behaviour following design changes. The idea is now sufficiently established that “good experience” is widely seen not so much as a market differentiator but as a hygiene factor, a fundamental tool to deliver a sales and marketing strategy.

Let’s tease this idea apart to identify why good design drives sales and how we can exploit the usability toolkit to get it right. We need to look at how people shop, the limits on economic rationality and the web as a social medium.

How shopping works

A simple task model

  1. Discover a product
    1. Become aware of a product
    2. Find information about the product
    3. Assess personal relevance of the product
  2. Select model
    1. Identify alternative models of the product
    2. Identify alternative suppliers of the product
    3. Assess and compare
    4. Select a model
  3. Select a channel
    1. Identify alternative channels for product
    2. Assess and compare
    3. Select a channel
  4. Buy
    1. Find the product
    2. Manage basket
    3. Specify delivery
    4. Pay
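The same model can be written down as a nested structure, which makes it easy to walk every step when mapping the journey to screens or test tasks. A sketch, with the labels copied from the list above:

```python
# The shopping task model as nested (task, subtasks) pairs.
TASK_MODEL = [
    ("Discover a product",
     ["Become aware of a product",
      "Find information about the product",
      "Assess personal relevance of the product"]),
    ("Select model",
     ["Identify alternative models of the product",
      "Identify alternative suppliers of the product",
      "Assess and compare",
      "Select a model"]),
    ("Select a channel",
     ["Identify alternative channels for product",
      "Assess and compare",
      "Select a channel"]),
    ("Buy",
     ["Find the product", "Manage basket", "Specify delivery", "Pay"]),
]

def walk(model):
    """Yield every step as a 'Task > Subtask' path string."""
    for task, subtasks in model:
        for sub in subtasks:
            yield f"{task} > {sub}"

steps = list(walk(TASK_MODEL))
print(len(steps))  # → 14 steps, each a potential point of failure
```

Fourteen opportunities to lose a sale; a test plan can use this walk directly as its task list.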

This model includes cognitive, social and practical steps; each step represents a risk of failure. Customers can only buy products they have heard about. They need to understand what a product does and how it could enhance their lives. They look for an outlet that is trustworthy, affordable and convenient.

What shoppers need

Within this task model, there are some important user goals:

  • finding;
  • discovering;
  • understanding;
  • assessing;
  • comparing;
  • choosing; and
  • administering.

Finding means locating information that you know is there. You find a number in the phone book. Discovery is more subtle: it reveals information that you need – but don’t know that you need. For example, you might discover an attractive new variety of hybrid rose while browsing your garden centre for a watering can. Finding and discovery are goals of Information Architecture. The more complex the product set, the greater the need to focus on “findability”.

Understanding, assessing, comparing and choosing work together. They enable a customer to make a decision about trying something new. Adoption is an intellectual process, a social process and an emotional process. According to Everett Rogers’ influential Diffusion of Innovations, potential customers seek information to reduce uncertainty and assess “relative advantage”. Well-written content quickly answers the tough question, “What’s in it for me?” Consistent coverage and layout make it easy to compare alternatives. Clear and influential content is the goal of copywriting and information design.

Administering is a chore: where should it be delivered? How do you want to pay? Good software ergonomics ensures that this data entry is efficient and simple. This is good old-fashioned usability at work.

Bounded rationality

Economists generally assume that buyers act rationally, seeking to minimise their costs and maximise their returns. Shoppers seek information and compare alternatives in order to make sound economic decisions. Herbert Simon refined this model: “maximisers” keep researching until they find the perfect product; “satisficers”, on the other hand, stop when they find a product that is good enough. In either case, well-written and trustworthy information is essential to discourage defection to another site.
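Simon’s distinction is easy to make concrete: a maximiser scans everything and takes the best option, while a satisficer stops at the first option that clears a threshold. A toy sketch, with invented scores:

```python
def maximiser(scores):
    """Scan every option, then return the index of the best one."""
    return max(range(len(scores)), key=lambda i: scores[i])

def satisficer(scores, good_enough):
    """Stop at the first option that clears the threshold; None if none does."""
    for i, score in enumerate(scores):
        if score >= good_enough:
            return i
    return None

options = [3, 7, 5, 9, 6]      # hypothetical product ratings
print(maximiser(options))       # → 3 (the 9, after reading everything)
print(satisficer(options, 6))   # → 1 (the 7, the first "good enough" match)
```

Either way, the shopper who cannot find trustworthy information on your site never reaches the stopping condition at all.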

Trust

Shopping is a social act. A well-crafted design suggests a professionally-run business. Direct, personal language implies plain dealing. Editorial content can demonstrate interest and expertise. User-generated content adds credibility and social “buzz”. Plain English says, “we don’t need to hide behind the small print.” Explicit privacy and security policies tackle distrust of online payment. Attractive presentation and engaging writing create a sense of well-being that can transfer to a positive view of the products on display.

Designing for a good shopping experience

An effective design actively addresses the needs of shoppers. Here are some examples.

Discovering and finding – The catalog reflects the way customers understand and use products. It avoids the “house” taxonomies of manufacturing and marketing. It uses carefully managed see-also links to identify genuine alternatives.
Understanding – Product descriptions are clear, complete, consistent and easy to read. Technical terms are explained. Potential uncertainties are recognised and addressed.
Assessing – Product descriptions explain relevant benefits concisely. They avoid marketing clichés and adopt a direct, personal style, consistent with the values of potential buyers.
Administering – Payment and delivery dialogs exploit sensible defaults, remember preferences and tightly integrate payment engines to eliminate repeated data entry.

A good shopping experience


“I came across Kopi coffee on the coffeemajic site. It’s the world’s most expensive coffee, made from beans ‘predigested’ by a civet. Because it sounded fascinating and vile in equal measure, I sought out buyer reviews on the company’s community board and read about the personal experiences of the chief taster. That was enough background for me; I convinced myself I had to try it.
I used the product list to compare the price and flavour of different brands and quickly settled on one variety to check out.

I thought about picking up a packet at my local deli. However, because the site remembers my details, it was just faster to buy it there and then.”

August 9, 2008

Data good; findings better

Filed under: evaluation — uxarchitecture @ 11:06 am

I get to read a lot of usability studies. Some are insightful and persuasive, clearly communicating the main issues and inviting action. Others contain indigestible inventories of raw data. Here are some examples:

  • a long list of specific errors;
  • an exhaustive set of annotated screen shots; or
  • a table of design problems grouped by page.

A heuristic evaluation can generate hundreds of expert comments. Likewise, a skilled observer can capture many subtle observations by analysing the video from a usability study. Data is good – but data is exactly what it is, the raw material from which a skilled analyst extracts findings.

Here’s what clients tell me they want to know.

  1. How well does it work?
  2. What are the major problems?
  3. What’s the impact on my users and my business?
  4. What do I need to do to fix it?
  5. How can my design team learn from this?
  6. How do I know you’ve done thorough and impartial work?

The missing step in these “briefcase buster” reports is analysis. A usability practitioner needs the ability to mine hundreds of data points to extract the one or two pages of insight that truly answer the client’s questions. There are many methods, including shuffle-the-post-it, qualitative analysis and mapping to guidelines. Here’s a route-map.

  1. Analyse data to create findings. A finding describes a pervasive issue: the graphic design is primitive; the actions do not match the user’s task model; terminology is arcane and inconsistent.
  2. Support findings with selected data. This demonstrates rigour, illustrates abstract ideas with concrete examples and adds emotional impact.
  3. Describe the specific impact on the business: higher learning costs; lower adoption; brand damage; reduced sales.
  4. Recommend design changes: follow the Windows style guide for radio button behaviour; do not use a fixed font size; describe business processes in plain English.
  5. Recommend tools and methods improvements: consider using a professional graphic designer; construct a task model before designing screens; read the Polar Bear book.
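The first step of that route-map – turning raw data points into findings – amounts to tagging observations and grouping them, which is what shuffle-the-post-it does on a wall. A minimal sketch of the idea; the tags and observations here are invented:

```python
from collections import defaultdict

# Raw data points tagged during analysis (invented examples).
observations = [
    ("terminology", "P1 did not understand 'remit funds'"),
    ("terminology", "P3 confused 'portfolio' with 'basket'"),
    ("navigation",  "P2 used Back instead of the breadcrumb"),
    ("terminology", "P5 asked what 'instrument' meant"),
]

def group_findings(data):
    """Group tagged observations into findings, most frequent first."""
    groups = defaultdict(list)
    for tag, note in data:
        groups[tag].append(note)
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

for tag, notes in group_findings(observations):
    print(f"{tag}: {len(notes)} supporting observations")
```

Each group becomes one finding, and the notes underneath it become the selected data that supports it in the report.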

Good findings should be high-level, clear, business-focused and actionable. Above all, to paraphrase the good Doctor, “Speak the client’s language.” To us it’s a research project; to them it’s an investment.
