When to Do an Expert Evaluation, and How to Make It Stick

Tom Hall
Published in UX Planet · 8 min read · Sep 5, 2017

Do you ever feel unsure of when you should be doing an expert evaluation, or what the difference is between expert and heuristic evaluations? Do you find that the issues raised by your expert evaluations tend to get lost somewhere between delivering your report and being implemented by developers? You’re not alone!

A small note up front: I’ve seen a lot of different terms for what I’m calling an expert evaluation, the technique where a UX expert reviews an interface and notes potential usability issues. Some alternatives include usability inspection, UX audit, heuristic evaluation, expert review, assessment, critique, and so on. With the exception of heuristic evaluation, these terms generally refer to the same process. For the purposes of this article, I’m going to use the terms expert evaluation and heuristic evaluation, and I’ll detail the differences between those two in a moment.

Let’s get into it.

Expert evaluation vs. heuristic evaluation

The primary difference between an expert evaluation and a heuristic evaluation is right there in the name: heuristics. In a heuristic evaluation the interface is compared against known design principles (the heuristics), and potential violations of each heuristic are noted. The heuristics proposed by Jakob Nielsen are very popular for this purpose, though they’re by no means the only ones you could use.

In contrast, in an expert evaluation some of these heuristics or principles will likely be used implicitly by the reviewer(s), but they won’t be explicitly assigning issues to specific heuristics. Instead, they’ll be relying on their expertise in UX (and with similar interfaces) to help them identify potential issues.

A valid criticism of heuristic evaluations is that they put a lot of trust in the heuristics being used, and in the reviewer’s interpretation of them. A heuristic evaluation does not actually require that the reviewer be a UX professional at all; the definition of a heuristic is “something that enables a person to discover or learn something for themselves”.

Anybody could do a heuristic evaluation if you gave them a reasonably detailed list of heuristics to read over (which isn’t to say that they would necessarily do a good job). In the worst case, this could mirror Andrew Lang’s famous statement that “Politicians use statistics in the same way that a drunk uses lamp-posts — for support rather than illumination.” An expert evaluation implies a deeper level of knowledge and experience.

On the other hand, you could take advantage of the fact that a heuristic evaluation doesn’t strictly require expertise by getting some non-UX members of the team, such as developers, involved in the process. This would probably be less effective at getting them to buy into the UX research process than having them observe usability tests, and I’ve never tried it myself, but it certainly sounds interesting. If anybody has tried this, I’d love to hear about how it went.

For the remainder of this article I’ll be talking about expert evaluations, but it should apply to heuristic evaluations as well.

When should you use expert evaluation?

Expert evaluation is a discount usability method. It can be effective at giving project stakeholders meaningful feedback in less time and for less money than some other, more involved research methods. Evaluations can also be good at identifying areas of the interface that may require further testing, and so they can be used alongside other research techniques.

In discount usability, the idea is to do focused user testing with smaller numbers of users. An expert evaluation can help identify which things to focus on. In short, you can use expert evaluation for some quick wins and to help identify where you should investigate next.

Expert evaluations can be effective on a limited budget (Image from Pexels)

However, there are certainly drawbacks to expert evaluation. If you’re using multiple reviewers (which you should!), it can become expensive in a hurry. The evaluation might also uncover a number of minor issues that cost the stakeholders time and money to fix, while other issues with more impact on actual users are missed.

Finally, the reviewers should ideally be experts both in usability and in the subject domain of the project (e.g. banking, if reviewing a banking app). Depending on how specialized the subject domain is, that could make it difficult to find true experts.

How to create an effective expert review

If you decide that expert evaluation is a good fit for your project, here are some tips on how to do it effectively.

Use more than one expert

For essentially the same reason that we include more than one user in usability testing, you should use more than one reviewer in an expert evaluation. If you do the evaluation by yourself, you’re bound to miss a number of issues. One expert alone may also have some set of biases in what they tend to look for. Another set of eyes will give some different insights, and hopefully provide a more complete picture. There’s a lot of variance in the kinds of issues that different reviewers will find.

How many experts you should use depends on the scale of the product being reviewed, as well as your budget. Since you’re likely doing the evaluation as part of a discount usability initiative, you’re probably not going to want to be lining up a dozen usability experts. Additionally, at some point you’re going to run into diminishing returns.

At ThinkUX, we generally use two or three reviewers. I don’t remember ever being involved in an evaluation that used more than five. Use your best judgment, but 2–5 reviewers should be fine in most cases.

Provide solutions (or don’t)

There are two schools of thought about making recommendations in an expert evaluation. On the one hand, clients pay experts for solutions. As you’re identifying issues in the evaluation, you could provide screenshots, links to other products, or mockups that potentially solve the problem you’ve identified.

Another view is to not make recommendations at all in your evaluations. Making a change costs the client time and money. You don’t want to recommend a change that won’t actually improve the design.

In this case, the alternative is to do an experiment. As I mentioned earlier, one of the strengths of an evaluation is to help identify issues that need further study, so (budget allowing) you could jump straight to studying those issues. For example, you could try creating some prototypes focused specifically on an issue (or issues) you’ve raised, and A/B testing some solutions.

Try some experiments to identify solutions, instead of suggesting what you think might work (Image from Pexels)

If your client isn’t willing to spend money on UX research beyond the evaluation, you may need to convince them. Either way, it’s important to be honest about solutions you feel confident about, and what needs further testing. Even solutions that seem to be working for a different product are no guarantee. Just because something works in one interface doesn’t mean it will work in another. There could be different contexts, different users, or any number of variables that you haven’t considered.

Focus and prioritize

When it comes time to start the evaluation, you should prioritize the kinds of issues that you’re going to be looking for and presenting to the client. It’s a good idea to make a note of every issue that you come across, because even small usability issues can add up and influence how users feel about the product in a negative way (the reverse-halo effect). However, it’s a very bad idea to simply hand the client a long laundry list of usability issues. A list like that is overwhelming and difficult to put into perspective, and the evaluation will end up at the bottom of a developer’s to-do list. I’ve seen it happen!

Remember: quick wins. You need to clearly rank and prioritize the issues you’ve discovered so that the client knows how to get those quick wins most effectively. You can get some insight into how to prioritize the issues by making sure that you understand the business goals, user personas, and analytics for the project.

Keep the goals of the project in mind when prioritizing issues (Image from Pexels)

When it comes to the ranking system itself, you can use whatever criteria you’d like, so long as it’s consistent and you’ve made it clear to the client beforehand. You could use Nielsen’s severity rankings as a starting point.

Another option, suggested by Steve Baty, is to categorize the issues like so (a short sketch combining severity and category follows the list):

  • Something that violates a known usability principle (e.g. heuristic or design pattern).
  • Something that needs to be tested further.
  • Something you would do differently, but isn’t necessarily a usability issue.
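
Whichever scheme you pick, it helps to record the severity and the category alongside each issue so the whole report can be sorted and filtered consistently. Here is a minimal sketch in Python, assuming Nielsen’s 0–4 severity scale and the three categories above; the class names and the example issues are purely illustrative, not any kind of standard.

```python
from dataclasses import dataclass
from enum import Enum, IntEnum


class Severity(IntEnum):
    # Nielsen's 0-4 severity scale
    NOT_A_PROBLEM = 0
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CATASTROPHE = 4


class Category(Enum):
    # The three categories suggested above
    VIOLATES_PRINCIPLE = "violates a known usability principle"
    NEEDS_TESTING = "needs further testing"
    WOULD_DO_DIFFERENTLY = "would do differently, not necessarily a usability issue"


@dataclass
class Issue:
    summary: str
    severity: Severity
    category: Category


# Hypothetical example issues, purely for illustration
issues = [
    Issue("Primary call to action is below the fold on mobile",
          Severity.MAJOR, Category.VIOLATES_PRINCIPLE),
    Issue("Hero copy is vague about what the product does",
          Severity.MINOR, Category.NEEDS_TESTING),
]

# Present the most severe issues first so the quick wins are obvious
for issue in sorted(issues, key=lambda i: i.severity, reverse=True):
    print(f"[{issue.severity.name}] ({issue.category.value}) {issue.summary}")
```

Sorting by severity like this is what makes the quick wins jump out when you hand the report over.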

Number the issues so they can be easily identified

Your numbering system could be something very simple. At ThinkUX, we use Google Slides for creating our expert evaluations. If you include only one issue per slide, you could just use the slide number to identify the issue. More typically, we use a numbering system related to where the issue was found. For example “LP-01” could refer to the first issue we found on the landing page. In our slides, we include the severity rating underneath the issue number, for clear visibility.

This makes it easy to refer to a specific issue from your evaluation when communicating with developers, or when adding tickets to whichever tracking system is being used for development (e.g. Jira). It also makes it easier to follow up on which issues have been addressed.
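
If you want to automate the bookkeeping, the same convention is easy to script. Here is a small sketch, assuming a page-prefix scheme like the “LP” example above; the prefixes, the zero-padded counter, and the ticket-title layout are all assumptions you would adapt to your own report and tracker, not anything Jira requires.

```python
from collections import defaultdict

# Hypothetical page prefixes; map these to the sections of your own report
PREFIXES = {"Landing page": "LP", "Checkout": "CO", "Settings": "ST"}

counters = defaultdict(int)


def next_issue_id(page: str) -> str:
    """Return the next ID for a page, e.g. 'LP-01' for the first landing-page issue."""
    counters[page] += 1
    return f"{PREFIXES[page]}-{counters[page]:02d}"


def ticket_title(issue_id: str, severity: int, summary: str) -> str:
    """Format a ticket title that traces straight back to the report."""
    return f"[{issue_id}][S{severity}] {summary}"


first_issue = next_issue_id("Landing page")  # "LP-01"
print(ticket_title(first_issue, 3, "Primary call to action is below the fold on mobile"))
# [LP-01][S3] Primary call to action is below the fold on mobile
```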

Don’t be TOO critical

It’s easy to be critical. It can be easy to forget that the people who will be reading your review are invested in their product. They might get defensive about some of your criticisms. When something has been done well in the interface, you should include this in your report as well. Does the product have good onboarding? Excellent typography? That’s worth noting. You wouldn’t want them to remove things they’ve done well while they’re fixing other issues.

You should absolutely be honest and clear about the issues that the product has, but don’t forget that nobody wants to get dunked on in front of their colleagues.

Wrap up

Expert (and heuristic) evaluations can be effective tools if you remember their limitations. They’re most effective on projects with lower budgets, and at identifying issues for further research.

Dumping a long list of issues on the project stakeholders can lower the chances of the evaluation being effective, so make sure you present your report in a way that increases the chances of it being implemented.

If you liked the article, let me know what you think! You can find more articles by ThinkUX on our blog.

Feel free to get in touch on LinkedIn.
