In this issue, I’ll discuss three things I’ve read recently:
An article on task analysis and its current place in UX practice
An article about rage clicks, and their situational utility
A book on Kaizen, and how it can improve how UX teams function
Five Questions Concerning Task Analysis
Dr. Doug Gillan, North Carolina State University | Article | 2013
In this article, Gillan provides an overview of task analysis (TA) that is an excellent introductory read on the topic. The article is organized into a series of questions:
What is task analysis?
Tasks are goal-oriented user actions (e.g., book a flight, purchase toothpaste, etc.). Analysis is breaking an object down into its elements. Put that together and task analysis is the process of breaking a user’s task into its sub-goals, including steps, operators, and sub-processes, so we can better understand, describe, and evaluate what users do.
Why do task analysis?
Task analyses can be used to define a user’s goals, describe their tasks, evaluate how an interface supports the user in completing their task, and inform the design of new prototypes by identifying areas for increased efficiency (reducing steps) or effectiveness (redesigning steps with high error probabilities).
When might you do task analysis?
Given its usefulness in defining and describing what users do, and evaluating and informing design, it shouldn’t be a surprise that task analysis is best performed early in the design process.
How do you conduct task analysis?
Task analysis is an approach, not a single method. Further, there is no one best method, so Gillan instead provides a list of the most common approaches: Task Description, Hierarchical Task Analysis (HTA), GOMS, Modular TA, and PRONET.
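To give a feel for the more formal end of that list: GOMS models a task as a sequence of primitive operators with estimated durations, which makes quick back-of-the-envelope time estimates possible. The sketch below is my own illustration, not from Gillan’s article; it uses the commonly cited keystroke-level model (KLM) operator times to estimate a simple search task.

```typescript
// A rough keystroke-level (KLM) GOMS estimate: sum the standard
// operator times for each step of a task. The operator times below
// are the commonly cited Card, Moran & Newell averages.
const OPERATOR_TIMES_S: Record<string, number> = {
  K: 0.2,  // keystroke or button press
  P: 1.1,  // point the mouse at a target
  H: 0.4,  // home hands between keyboard and mouse
  M: 1.35, // mental preparation
};

// "Click the search box, type a 6-letter query, press Enter":
// M (decide) + P (point to box) + K (click) + H (hands to keyboard)
// + 6 K (type) + K (Enter)
const steps = ["M", "P", "K", "H", "K", "K", "K", "K", "K", "K", "K"];

const estimateSeconds = steps
  .map((op) => OPERATOR_TIMES_S[op])
  .reduce((total, t) => total + t, 0);

console.log(`Estimated task time: ${estimateSeconds.toFixed(2)} s`); // ≈ 4.45 s
```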
Whither task analysis?
In other words, “what might the future hold for task analysis?” Gillan makes a few predictions:
The use of task analysis will continue as long as it is useful
It will remain useful far into the future
In 50 years task analysis will still exist but in different forms
Automated data collection will start to make task analyses easier to conduct
I have been a fan of task analysis for a long time, and have applied it when redesigning interfaces. However, the current state of task analysis in the industry is that most teams don’t do it, and what some call “task analysis” is something altogether different.
The exact reason why task analysis isn’t more widely used is a mystery to me. Other methods commonly referenced in the usability literature, like usability testing, inspection methods (such as heuristic evaluation), and participatory design, all have their place in common UX practice. Those other methods might seem more valuable, and TA might seem too time-consuming, theoretically complex (looking at you, PRONET), or old school (some methods date back to the 1970s). It could be a lack of awareness that keeps TA out of common practice; it seems like TA is almost exclusively taught in HF or HCI grad programs. Perhaps the biggest reason is that the proliferation of usability testing tools in the past 10-15 years has made it so much easier to test designs with end users that analytical methods are generally less practiced than before.
However, task analysis still has its place as a tool for UX practitioners to generate analytical design recommendations. I suggest that you use HTAs at the start of a redesign process. Once you have an HTA reflecting how users interact with your current system, you can identify where task efficiency (look for steps that can be shortened or eliminated) and effectiveness (look for steps that are likely to have higher error probabilities, or where errors are more critical) can be improved in your redesign.
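To make that concrete, here is one way to represent an HTA so you can query it for redesign candidates. This is a minimal sketch of my own, not a method from Gillan’s article; the example task, error probabilities, and flagging threshold are all illustrative.

```typescript
// A minimal sketch of a hierarchical task analysis (HTA) as a tree
// of goals and sub-goals, annotated with estimated error probabilities.
interface TaskNode {
  goal: string;
  errorProbability?: number; // estimated from testing or expert judgment
  subtasks?: TaskNode[];
}

const bookFlight: TaskNode = {
  goal: "Book a flight",
  subtasks: [
    { goal: "Enter origin and destination", errorProbability: 0.05 },
    {
      goal: "Select dates",
      subtasks: [
        { goal: "Open date picker", errorProbability: 0.02 },
        { goal: "Pick return date before outbound", errorProbability: 0.15 },
      ],
    },
    { goal: "Enter payment details", errorProbability: 0.1 },
  ],
};

// Walk the tree and flag steps worth redesigning (effectiveness);
// the same traversal could count steps to target for elimination (efficiency).
function flagRiskySteps(node: TaskNode, threshold = 0.1): string[] {
  const flagged =
    node.errorProbability !== undefined && node.errorProbability >= threshold
      ? [node.goal]
      : [];
  return flagged.concat(
    (node.subtasks ?? []).flatMap((child) => flagRiskySteps(child, threshold))
  );
}

console.log(flagRiskySteps(bookFlight));
// ["Pick return date before outbound", "Enter payment details"]
```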
I’m biased (Doug was my advisor), but to me, this is the article that anyone learning about task analysis should read first. I hope you read this article and consider using task analysis in your work going forward if you don’t already.
What Rage Clicks Can Tell Us About User Experience
Jessica Graham, Cyber-Duck | Article | 2021
I recently re-read this article after two separate clients requested that my team report on how many users rage clicked during a usability test.
Here’s the TL;DR for the article: “rage clicks” = rapid, successive clicks a user makes on a particular area of an interface out of frustration. If you have a tool that allows you to observe these rage clicks, keep track of where they occur on your website and use that as a shorthand for prioritizing where to improve usability.
Now, on to thoughts and reactions:
It seems like more and more analytics platforms (like Hotjar, FullStory, and even Google Tag Manager) provide ways to track these rage clicks
In the article above, Graham outlines what is likely the best use case for rage clicks: implement a way to track them, use them to identify and prioritize areas for improvement, and run follow-up usability testing to uncover why they occur (a minimal detection sketch follows these notes).
As for my clients’ requests to report the number of rage clicks in a usability testing scenario, why would we want to do this? Certainly, we have other measures we could report, such as task success rate or time on task, that summarize the usability of a system better and more precisely than rage click frequency. Further, if we did observe a rage click, would it not be better to describe qualitatively why it happened? I’ll go on record as saying that I think rage clicks are cool and that tracking them is a smart approach to problem discovery; however, they seem to have little utility as a standalone measure.
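For the curious, here’s roughly what passive rage-click detection can look like in the browser. This is a minimal sketch of my own, not Graham’s or any vendor’s implementation; the thresholds and the reportRageClick hook are illustrative assumptions.

```typescript
// A minimal sketch of passive rage-click detection: flag a burst of
// clicks that land close together in both time and space.
// Thresholds are illustrative; tools like Hotjar and FullStory
// use their own heuristics.
const CLICK_COUNT = 3;
const WINDOW_MS = 700;
const RADIUS_PX = 30;

let recentClicks: { x: number; y: number; t: number }[] = [];

document.addEventListener("click", (e) => {
  const now = performance.now();
  // Keep only clicks that are recent and near the current click.
  recentClicks = recentClicks.filter(
    (c) =>
      now - c.t <= WINDOW_MS &&
      Math.hypot(c.x - e.clientX, c.y - e.clientY) <= RADIUS_PX
  );
  recentClicks.push({ x: e.clientX, y: e.clientY, t: now });

  if (recentClicks.length >= CLICK_COUNT) {
    reportRageClick(e.target as Element, e.clientX, e.clientY);
    recentClicks = []; // reset so one burst logs once
  }
});

// Hypothetical hook into your analytics pipeline; here it just logs.
function reportRageClick(target: Element, x: number, y: number): void {
  console.log("Rage click detected on", target.tagName, "at", x, y);
}
```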
Takeaways:
Implementing passive rage click tracking has emerged as a great way for teams to identify areas of their interfaces in need of improvement
The utility of rage clicks as an outcome measure in usability testing is questionable, given better alternatives
The Spirit of Kaizen: Creating Lasting Excellence One Small Step at a Time
Dr. Robert Maurer | Book | 2012
Kaizen, which I understand to mean “good change”, is a concept that favors small, continual improvements over large, sweeping changes. Kaizen is typically associated with iterative improvements in industrial processes (think automotive assembly). However, in this book, Maurer reapplies Kaizen principles to management, outlining various ways that a mindset of continual improvement helps teams and businesses perform better.
This book has its practical tips, but it is geared more toward those in management/leadership roles, has its weaker chapters, and requires the reader to put up with some pop-psych ramblings. All that said, here are my cliff notes on how UX managers and leaders could apply Kaizen principles to their teams:
Establish a Kaizen mindset — First, for Kaizen to really work, you and your team need to buy into the idea that small changes can yield big results. For managers, I’d suggest reading the book and then either providing a copy for every member of your team or setting aside your next staff meeting to share the principles of Kaizen and how they will benefit your team.
Make the most of everyone’s perspective — Kaizen should not be top-down; improvements should be driven by team members. Your team members are experts in their own processes; everyone has a valuable perspective to share. As you establish a Kaizen mentality in managing your team, make it clear that you’re interested in their ideas for improving quality or efficiency.
Also, as you onboard new team members, encourage them to share their outside perspective. Maurer suggests being direct with new team members and telling them “our system reflects the best ideas we’ve had so far. But I expect you to tell me if you see a way to do things better.”
Set aside time for Kaizen-ing — You might find it helpful to establish a dedicated time for Kaizen-ing: a group Kaizen session with your team to discuss their suggestions. To get the most out of these sessions, I suggest giving them a bit of structure by setting ground rules (e.g., ask “how could we improve our research processes? Come up with ideas that make a single change and cost nothing to implement”) and using a diverge-and-converge agenda (i.e., have team members brainstorm individually for 5-10 minutes, then share their ideas with the group).
Field, follow-through, & shoutout — Acknowledge feedback and suggestions during your Kaizen sessions, but more importantly, take diligent notes and follow through on the suggestions. Make sure to start each Kaizen session with acknowledgments and shoutouts for the improvements your team made through their suggestions in the previous session.
Handle mistakes and issues productively — Mistakes, no matter how small, are opportunities to improve how we work. You might choose to use one of your dedicated Kaizen sessions to specifically discuss mistakes; following the advice Maurer outlines, the agenda might look like this:
First, establish that you want to hear about mistakes early & often. Blame shouldn’t be a part of your mindset.
Next, collaboratively define the mistakes that your team wants to avoid at all costs.
Using that list, brainstorm what early warning signs for these mistakes could be.
With this list of mistakes and possible warning signs in hand, discuss what problems you frequently encounter and what errors you might be ignoring.
Finally, discuss how you will set up a safe space to talk about errors. This plan should include deciding how your team shares its mistakes and how you pull the lessons out of them.
Improving the User Experience through Kaizen — Kaizen doesn’t just help our internal processes and procedures; it can also be applied to improving the digital experiences we work on. Asking questions like “What is a small but annoying problem that affects our users?” or “Is there one change we could make that would make [goal] easier for [our users]?” is the cornerstone of great product discovery and incremental innovation.
Kaizen is simply a mindset that, when applied correctly, allows us to make small but meaningful changes to our work. By applying these principles and setting aside time for Kaizen, you can expect your team’s processes to become more efficient, your output to be higher quality, and your relationships with stakeholders to improve.