Beyond the phrase “you are not your user” part 2
The first part of this series alludes to two ideas. One, product design (and its many similarly named, interactive, user-centered siblings) intersects with and derives from many other disciplines. And two, because design is more mindset than skillset, the attempt of young designers to go forth in their careers and be “user-centered” can have an adverse effect; trying to be unbiased and empathetic at the same time can feel like an exercise in self-denial.
In response, I’ve come up with a mantra that helps me pay deference to design’s multidisciplinary background while also grounding me in the face of its intrinsic complexity:
My user is a human very different than me:
A body and brain with limits
A person with a perspective
And a singular character with particularities
This part of the series focuses on “the body and brain with limits” — and a process for researching and designing in the face of user limitations.
We are sent here, to this Earth, packaged with certain equipment — a big brain and a body that allows us to do a few things really well. We can stand upright, look each other in the eyes, chew quickly, operate with opposable digits, and use language to spread information around. Combining all that we’re best at, we create, cooperate, and communicate — but only to a point.
We have limits.
“Our greatest technological innovations…carry names that claim our prowess over the world: the engine (from ingenium, or ‘ingenuity’) or the computer (from computare, or ‘reckoning together’),” says Siddhartha Mukherjee in The Gene. “Our deepest scientific laws, in contrast, are often named after the limits of human knowledge: uncertainty, relativity, incompleteness, impossibility.”
Put one way, the story of humankind is the story of how we’ve continually fought to defy our limitations. And in the same vein, the story of design is the story of a constant redefinition, recreation, and reinvention of tools, systems, and processes to expand what we can do, make, and be.
For a designer bent on solving a problem, innovation requires grappling daily with a more predictable and banal set of human limitations: we get hungry, thirsty, tired, distracted, stressed, bored. We are prone to irritability, restlessness, and fickleness, and are just generally particular and difficult to read.
When it comes to people and their use of products, things like attention span, memory capacity, the (in)ability to multitask, cognitive and physical fatigue, situational awareness, and emotional intelligence have very real effects on usability and acceptability. But in context and in combination, limitations housed in such dimensions can be difficult to assess.
At Civis, I try to do three things consistently to address the limits of my users:
Think in heuristics
Most designers are familiar with the process of heuristic evaluation and Jakob Nielsen’s 10 Usability Heuristics, a set of general rules of thumb that allow for subcategorization of a user’s experience. Some of us have internalized these heuristics such that in our day-to-day, we may reference “error recovery” when debriefing with our team after a user test or consider visibility when mocking up a strategy for notifications.
Learning and internalizing is one thing. More actively thinking about usability in terms of heuristics, however, aids the process of developing testable hypotheses, especially in the domain of user limitations. For example, an interesting dilemma our product development team faces at Civis is the tension between what we allow our users to do and what we suggest that they do. From a heuristics standpoint, this is a tension between flexibility and error prevention. The relationship we seek with our users dictates that we trust they know what they’re doing when, say, they want to move a large, unwieldy data set from an external source to our Platform, or run a script that triggers an expensive query.
In many cases, the flexibility we provide translates to self-sufficient users who consistently find new and interesting ways to make use of our tools. Sometimes, however, such freedom leads to unexpected consequences: performance issues, resource depletion, tricky error messages, and ultimately confusion or frustration. Hence our dilemma: where do we draw the line between freedom and constraint?
One solution is to introduce mechanisms, by design, that encourage or enforce ‘happy paths.’ Such mechanisms include defaults, strategic navigation, and intentional hierarchy. Each strategy comes with an implicit understanding of the great paradox of practicing behavioral design: anytime one seeks to mitigate a human limitation, they must exploit another. Defaults, for example, take advantage of behavioral inertia — the tendency we have to avoid taking direct action if we can help it — to the same extent that they exploit the likelihood that we will adhere to the cultural norm implied by whatever’s preselected. Consider that whenever you must uncheck a box that says “I would like to receive email updates about ‘x’ product”, some designer somewhere is aware that you are very unlikely to uncheck said box, or even, potentially, to notice that it exists.
Designers, like parents or managers or teachers or politicians, cannot escape the reality that they will occasionally be forced to make unpopular choices in the so-called name of the greater good. That said, being aware of the laws that govern human ability, starting with usability heuristics, helps us make choices based on what’s best for the entire user population and not just specific users who feel restricted.
Ultimately, “user-centered design” is an aphorism that does not account for how difficult it is to translate specific feedback from individual people into insights that can be generalized to everyone who uses a product.
Break down problems into smaller problems
While it may be possible to represent your ‘User’ using references to, say, recognition over recall, it is unlikely that the real people you encounter will actually speak to you in heuristics.
“My user control and freedom is hindered by the layout of this sub-navigation” or “The helper text on this component makes it difficult for me to recover from my errors” are not statements you’re bound to hear during a usability test. Instead, you get things like: “It’s hard for me to find stuff”, “I don’t like how this looks,” “This is kinda weird,” and “That’s not what I expected to happen.”
The process by which designers and other user-facing folks translate what users say into what they mean is inherently lossy. In the same way that a compressed JPEG file always turns out a little bit fuzzy, the synthesis documents, pain points, and summary decks that are so often the result of user research and testing are mere approximations of the original source. Not to mention that the original source, recall, tends toward distraction, stress, frustration, and general murkiness.
At Civis, another tension our product team faces with our users is their tendency to self-diagnose. The people who use our products are smart, often technical individuals who speak the language of data science and think about problems in very specific ways. As a result, they often come to the table with solutions to their own issues in mind. Engaged users and the practice of co-design are the stuff of dreams for many product designers; it is a credit to our users that they are often close to prescient when it comes to giving us feedback.
Still, achieving cohesion, consistency, and usability across an entire product landscape is a larger task than any one person’s capacity to troubleshoot. For this reason, pairing what users say with more formal assessments of how they interact with products in the wild can help make formulating insights less lossy. My assessments of choice are think-aloud testing and task analysis.
Think-Aloud Tests (not to be confused with cognitive walkthroughs, which are expert reviews conducted without users) involve the self-explanatory process of asking users to verbalize their thoughts as they perform tasks designed to test the core aspects of an interface, system, or tool. While they are a versatile type of test and can be used at any stage in the iterative process, the quality of the results they yield relates directly to the quality of the tasks being tested.
Formal task analysis can be helpful for understanding complex user workflows and, in turn, generating high-quality tasks. If, like me, you’re a designer relatively new to the domain of your users, it’s likely a good exercise.
The main benefit of task analysis is that it helps with breaking problems down into their constituent parts, to understand users from the perspective of, well, use. While the process is undeniably tedious, most cross-functional teams provide a shortcut in the form of task-oriented software engineers. Working directly with technical project leads to write tasks for interviews and usability tests has several advantages. One, it blends technical and design thinking to strengthen the cross-functionality of a team. Two, it’s more likely to yield a test or interview guide that is both inductive and deductive. Inductive in the sense that tasks work to test, define, or explore the core aspects of a specific concept or interface, and deductive in the sense that they relate back to larger product patterns. And three, engineers can aid in creating testing frameworks that are more realistic by adding nuance to the scenarios and hypotheticals surrounding prototypes or research questions.
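To make the breakdown concrete, a hierarchical task analysis can be modeled as a simple tree: each goal decomposes into ordered subtasks, and the leaves of the tree are the atomic actions that make natural candidates for usability-test tasks. This is a minimal sketch; the workflow names are hypothetical, not taken from any real product.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a hierarchical task analysis: a goal
    decomposed into ordered subtasks."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def leaves(self):
        """Yield the atomic actions (leaf tasks with no children),
        which make natural candidates for usability-test tasks."""
        if not self.subtasks:
            yield self.name
        else:
            for t in self.subtasks:
                yield from t.leaves()

# Hypothetical decomposition of a data-import workflow.
import_data = Task("Import a data set", [
    Task("Choose a source", [
        Task("Connect to the external database"),
        Task("Select the table to import"),
    ]),
    Task("Configure the import", [
        Task("Map column types"),
        Task("Schedule the sync"),
    ]),
])

print(list(import_data.leaves()))
```

Walking the tree with an engineer is a quick way to check that the test tasks you write actually cover every atomic step of the workflow.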
Measure time and effort
Ideology, aphorisms, and complexity aside, the primary goal of any designer aware of a user’s limitations should ultimately be to mediate, mitigate, or otherwise confront them. With the awareness of limitations comes the opportunity for a certain kind of measurement and evaluation.
A key decision involved in any good scientific process, pseudo or otherwise, is the selection of the metrics used for evaluation. In task-based usability testing, it’s traditional to measure the time it takes users to perform each task and pair those measurements with a Single Ease Question (SEQ) after each task and a System Usability Scale (SUS) questionnaire at the end of the test. This is a well-accepted industry approach that works for most UX work.
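Scoring the SUS is a common stumbling block because half of its ten items are negatively worded. A quick sketch of the standard scoring rule (odd items score as the response minus 1, even items as 5 minus the response, and the sum is scaled to 0-100):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses, each an integer from 1 (strongly disagree)
    to 5 (strongly agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # contribution is (response - 1). Even-numbered items are
        # negatively worded: contribution is (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# A respondent who answers 3 ("neutral") to everything lands at 50.
print(sus_score([3] * 10))  # → 50.0
```

Note that a SUS score is not a percentage; 68 is roughly the published average, so scores should be compared against that benchmark rather than read as “68% usable.”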
For human factors and ergonomics work (and for our highly technical projects at Civis), the survey of choice is the NASA-TLX (Task Load Index), a six-item questionnaire developed by NASA in the 1980s to measure perceived effort. The TLX asks users to gauge the mental, physical, and temporal demand of a task and self-evaluate their performance, level of effort, and frustration in completing it.
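In practice, many teams use the “Raw TLX” simplification, which skips the original instrument’s pairwise-weighting step and simply averages the six subscale ratings. A minimal sketch of that simplified scoring (the dictionary keys here are descriptive labels I’ve chosen, not an official API):

```python
# The six NASA-TLX subscales, each rated by the user on a 0-100 scale.
TLX_SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings):
    """Raw (unweighted) TLX: the mean of the six subscale ratings.
    The full instrument instead weights each subscale by 15
    pairwise-comparison judgments; this sketch omits that step."""
    missing = [s for s in TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

print(raw_tlx({
    "mental_demand": 70, "physical_demand": 10, "temporal_demand": 50,
    "performance": 30, "effort": 60, "frustration": 20,
}))  # → 40.0
```

Collecting a score like this per task, rather than once per session, is what makes it possible to pinpoint which step in a workflow is causing the strain.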
Filling out questionnaires in addition to performing tasks requires a good degree of effort in and of itself, which is something to keep in mind when designing usability tests. With that said, measuring workload discretely is invaluable to understanding what part of a process is causing users strain.
The elephant in the room, perhaps present throughout this entire conversation, is that designers, being human, have limitations too. The third and final part of this series will explore the last part of my mantra — ‘the singular character with particularities’ — and dive more deeply into what it means to be an individual designing for individuals.
P.S. we’re hiring product designers!
Interested in tackling complex user research problems? Come work with us!