Hello world
Welcome to the first issue of The UXR’s Annotations. To those who have already subscribed, I appreciate your support.
If we don’t know each other, I’m Thomas, and it’s great to meet you.
Each month’s issue will contain noteworthy UX and UX-adjacent content I’ve read that month. I’ll provide my notes on why it is important, what you should know about it, and food for thought. Each issue might, or might not, center on a theme.
In this issue, I’ll discuss:
An article on task analysis, adding commentary on its current place in UX practice
An article about rage clicks and when they are useful vs. not
A book on Kaizen and how it can improve how UX teams work
Five Questions Concerning Task Analysis
Dr. Doug Gillan, North Carolina State University | Article | 2013
In this article, Gillan provides an excellent overview of task analysis (TA). The article is organized into a series of questions:
What is task analysis?
Tasks are goal-oriented user actions (e.g., book a flight, purchase toothpaste, etc.). Analysis is breaking an object down into its elements. Put that together, and task analysis is breaking a user’s task into its sub-goals, including steps, operators, and sub-processes, so we can better understand, describe, and evaluate what users do.
Why do task analysis?
Task analyses can be used to define a user’s goals, describe their tasks, evaluate how an interface supports the user in completing their task, and inform the design of new prototypes by identifying areas for increased efficiency (reducing steps) or effectiveness (redesigning steps with high error probabilities).
When might you do task analysis?
Given its usefulness in defining and describing what users do, and evaluating and informing design, task analysis is best performed early in the design process.
How do you conduct task analysis?
Task analysis is an approach, not a singular method. Further, there is no single best method; Gillan provides an overview of the most common techniques: Task Description, Hierarchical Task Analysis (HTA), GOMS, Modular TA, and PRONET.
Whither task analysis?
In other words, “what might the future hold for task analysis?” Gillan makes a few predictions:
The use of task analysis will continue as long as it is useful
It will remain useful far into the future
In 50 years, task analysis will still exist but in different forms
Automated data collection will start to make task analyses easier to conduct
I have been a fan of task analysis for a long time. However, most UX teams don’t conduct task analyses, and what some call “task analysis” is something altogether different.
Exactly why task analysis isn’t more widely used is a mystery to me. Other methods/approaches commonly referenced in usability literature, like usability testing, inspection methods (like heuristic evaluation), and participatory design, all have their place in common UX practice. Those other methods might seem more valuable, and TA might seem too time-consuming, theoretically complex (looking at you, PRONET), or old school (some methods date back to the 1970s). On the other hand, a lack of awareness may be keeping TA out of common practice; TA is almost exclusively taught in HF or HCI grad programs. Perhaps the biggest reason is that the proliferation of usability testing tools in the past 10-15 years has made it so much easier to test designs with end-users that analytical methods are generally less common practice than before.
However, task analysis still has its place as a tool for UX practitioners to generate analytical design recommendations. I suggest that you use HTAs at the start of a redesign process. Once you have an HTA reflecting how users interact with your current system, you can identify where task efficiency (look for steps that can be shortened or eliminated) and effectiveness (look for steps that are likely to have higher error probabilities or criticality for errors) can be improved in your redesign.
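To make the idea concrete, here is a minimal sketch of how an HTA could be represented in code. The task tree below is a hypothetical example (a generic flight-booking task, not from Gillan's article), and counting leaf-level steps is just one simple way to get a baseline for comparing a current design against a redesign:

```javascript
// Sketch: an HTA as a nested goal tree (the task and subgoals are hypothetical).
// Leaf nodes (no further decomposition) are the concrete steps a user performs.
const bookFlight = {
  goal: "Book a flight",
  subgoals: [
    { goal: "Search flights", subgoals: [
      { goal: "Enter origin and destination" },
      { goal: "Enter dates" },
      { goal: "Submit search" },
    ]},
    { goal: "Select a flight" },
    { goal: "Pay", subgoals: [
      { goal: "Enter payment details" },
      { goal: "Confirm purchase" },
    ]},
  ],
};

// Count leaf-level steps; comparing this count before and after a redesign
// gives a rough efficiency baseline (fewer steps = shorter task).
function countSteps(node) {
  if (!node.subgoals || node.subgoals.length === 0) return 1;
  return node.subgoals.reduce((sum, child) => sum + countSteps(child), 0);
}

console.log(countSteps(bookFlight)); // 6 leaf steps in this example tree
```

A fuller analysis would annotate each leaf with an estimated error probability or criticality rather than just counting steps, but even this bare structure makes redundant or removable steps easy to spot.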
I’m biased (Doug was my advisor), but to me, this is the article that anyone learning about task analysis should read first. I hope you read this article and consider using task analysis in your work going forward if you don’t already.
What Rage Clicks Can Tell Us About User Experience
Jessica Graham, Cyber-Duck | Article | 2021
I recently re-read this article after two separate clients requested that my team report on how many users rage clicked during a usability test.
Here’s the TL;DR for the article: “Rage clicks” = a user making rapid, successive clicks on a particular area of an interface out of frustration. If you have a tool that allows you to observe these rage clicks, keep track of where they occur on your website and use that as a shorthand for prioritizing where to improve usability.
Now, on to thoughts and reactions:
More and more analytics platforms (like Hotjar, FullStory, and even Google Tag Manager) provide ways to track rage clicks.
In the article above, Graham outlines what is likely to be the best use case for rage clicks: Implement a way to track them, use them to identify and prioritize areas for improvement, & run follow-up usability testing to uncover why they occur.
As for my clients’ requests to report the number of rage clicks in a usability testing scenario, why would we want to do this? Certainly, we have other measures that summarize the usability of a system better and more precisely than rage click frequency. Further, if we did have a rage click, would it not be better to describe why it happened qualitatively? I’ll go on record saying that I think rage clicks are cool and that tracking them is a smart way to discover problems. However, they seem to have little utility as a summative or prescriptive measure.
Takeaways:
Implementing passive rage click tracking has emerged as a great way for teams to identify areas of their interfaces needing improvement.
The utility of rage clicks as an outcome measure in usability testing is questionable, given more prescriptive and precise alternatives (e.g., you’ll get more mileage out of measuring other behavioral metrics, like time on task, and attitudinal measures, like SEQ, when doing usability testing).
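For the curious, here is a rough sketch of the detection logic behind rage-click tracking. The thresholds (gap between clicks, click radius, minimum burst size) are illustrative assumptions on my part; tools like Hotjar and FullStory use their own proprietary heuristics:

```javascript
// Sketch of rage-click detection over a click log (thresholds are illustrative).
// A "rage click" burst = minClicks or more clicks, each within maxGapMs of the
// previous click and within maxRadiusPx of the first click in the burst.
function detectRageClicks(clicks, { maxGapMs = 500, minClicks = 3, maxRadiusPx = 30 } = {}) {
  const bursts = [];
  let run = [];
  for (const click of clicks) {
    const prev = run[run.length - 1];
    const anchor = run[0];
    const closeInTime = prev !== undefined && click.t - prev.t <= maxGapMs;
    const closeInSpace =
      anchor !== undefined &&
      Math.hypot(click.x - anchor.x, click.y - anchor.y) <= maxRadiusPx;
    if (closeInTime && closeInSpace) {
      run.push(click); // still part of the current burst
    } else {
      if (run.length >= minClicks) bursts.push(run); // close out a qualifying burst
      run = [click]; // start a new candidate burst
    }
  }
  if (run.length >= minClicks) bursts.push(run);
  return bursts; // each burst is the array of clicks that formed it
}

// Example: three fast clicks in one spot, then one click elsewhere.
const bursts = detectRageClicks([
  { x: 100, y: 100, t: 0 },
  { x: 102, y: 101, t: 200 },
  { x: 101, y: 99, t: 350 },
  { x: 500, y: 400, t: 5000 },
]);
console.log(bursts.length); // 1 burst detected (the first three clicks)
```

In a live page, you would feed this from a `click` event listener that records coordinates and timestamps; the point of the sketch is just that the underlying signal is simple, which is part of why it works better for discovering problem areas than for precisely summarizing usability.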
The Spirit of Kaizen: Creating Lasting Excellence One Small Step at a Time
Dr. Robert Maurer | Book | 2012
Kaizen, which I understand to mean “good change,” is a concept that favors small, continual improvements over large, sweeping changes. Kaizen is typically associated with iterative improvements in industrial processes (think automotive assembly). However, in this book, Maurer reapplies Kaizen principles to management, outlining various ways that a mindset of continual improvement helps teams and businesses perform better.
This book has very practical tips, but it is more geared towards those in management/leadership functions, has its weaker chapters, and requires the reader to put up with some pop-psych ramblings. All that said, here are my cliff notes for how UX managers and leaders could apply Kaizen principles to their teams:
Establish a Kaizen mindset — First, for Kaizen to work, you and your team need to buy into the idea that small changes can yield significant results. For managers, I’d suggest reading the book and either providing a copy to each member of your team or setting aside your next staff meeting to share the principles and benefits of Kaizen.
Make the most of everyone’s perspective — Kaizen should not be top-down; team members should drive improvements. Your team members are experts in their own processes; everyone has a valuable perspective to share. Your role is only to lead and enable a growth mindset. As you establish a Kaizen mentality in managing your team, make it clear that you’re interested in their ideas for how they would go about improving their work.
Also, as you onboard new hires, you should encourage them to share their outside perspectives. Maurer suggests being direct with new team members and telling them, “our system reflects the best ideas we’ve had so far. But I expect you to tell me if you see a way to do things better.”
Set aside time for Kaizen-ing — You might find it helpful to establish a dedicated time for Kaizen-ing: a group Kaizen session with your team to discuss their suggestions. To get the most out of these sessions, I suggest setting ground rules (e.g., ask “how could we improve our research processes? Come up with ideas that make a single change and cost nothing to implement”) and using a diverge-and-converge agenda (i.e., have team members brainstorm individually for 5-10 minutes, then share their ideas with the group).
Field, follow through, & shoutout — Acknowledge feedback and suggestions during your Kaizen sessions, but more importantly, take diligent notes and follow through on suggestions. Make sure to start each Kaizen session with acknowledgments and shoutouts for the improvements that your team made through their suggestions in the previous meetings.
Handle mistakes and issues productively — Mistakes, no matter how small, are opportunities for us to improve ourselves and how we work with stakeholders. You might choose to use one of your dedicated Kaizen sessions to specifically discuss mistakes. Following Maurer’s advice, your agenda might look like this:
First, establish that you want to hear about mistakes early and often, and that blame won’t be part of the discussion.
Next, collaboratively define the mistakes that your team wants to avoid at all costs.
Using that list, brainstorm what the early warning signs for these mistakes could be.
With this list of mistakes and possible warning signs in hand, discuss which problems your team frequently encounters and which errors might be getting ignored.
Finally, discuss how to set up a safe space to talk about errors. This plan should include deciding how team members share their mistakes and how the team pulls the lessons out of them.
Improving the User Experience through Kaizen — Kaizen doesn’t just help our internal processes and procedures. We can apply Kaizen to improving the digital experiences we work on. Asking questions like “What is a small but annoying problem that affects our users?” or “Is there one change we could make that would make [goal] easier for [our users]?” is the cornerstone of great product discovery and incremental innovation.
Kaizen is simply a mindset that allows us to make small and meaningful changes to our work. By applying these principles and setting aside time for Kaizen, you can expect your team’s processes to become more efficient, your output to be higher quality, and your relationships with stakeholders to improve.
Until next month…
Thanks again for being part of the inaugural issue. If you have any thoughts or reactions, I welcome your reply.
Cheers,
Thomas