The Vicious Cycle of Recruiting With Unpaid Work

The pandemic has left many people unemployed, including many UX professionals. Some companies are taking advantage of this situation to automate and scale their recruiting. (Historically this has been called “carpetbagging”, and it hasn’t gone away.) I have seen one company even make the completion of unpaid work part of its application process: after an initial screening interview with a third-party recruiter, the applicant is sent a link and asked to watch a video of a user interacting with the company’s software and then evaluate the session. Applicants are not compensated for this work, which the recruiter told me takes about three quarters of an hour.

Forty-five minutes of uncompensated work in exchange for a shot at a steady job might seem like a fair trade to someone just entering the field, or who is otherwise desperate. But if ten people go through this process, the company has received 7.5 hours of free labor; if 100 people go through it, the company has received 75 hours, nearly two weeks of free labor. It becomes easy to see how the incentives are misaligned.
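The arithmetic above can be sketched in a few lines; the 40-hour work week is my assumption for converting hours to weeks.

```python
# Sanity-check the free-labor arithmetic: 45 unpaid minutes per
# applicant adds up fast as the applicant pool grows.
MINUTES_PER_APPLICANT = 45
HOURS_PER_WORK_WEEK = 40  # assumed standard work week

def free_labor_hours(applicants: int) -> float:
    """Total unpaid hours the company collects from `applicants` people."""
    return applicants * MINUTES_PER_APPLICANT / 60

for n in (10, 100):
    hours = free_labor_hours(n)
    weeks = hours / HOURS_PER_WORK_WEEK
    print(f"{n} applicants -> {hours:.1f} hours of free labor ({weeks:.1f} work weeks)")
```

At 100 applicants this yields 75 hours, or just under two 40-hour weeks, matching the figure in the text.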

When the prospect of being hired is drastically reduced, going through an application process like this is an unambiguously negative experience. Luckily, quality UX candidates have a passion for improving experiences. They want to work somewhere they can put this passion to use, which means a place where their recommendations will be taken seriously. A company that persists in putting people through bad experiences will ultimately fail to attract quality candidates; it’s a vicious cycle.

The unpaid work I described above was ostensibly meant to show the company how the candidate evaluates a usability session. Here are two alternatives to that kind of recruiting method:

  1. Pay applicants for the time they spend evaluating your usability sessions. That at least keeps the incentives more aligned and steers clear of unethical “carpetbagging” practices.
  2. Talk to candidates instead. Quality candidates will be willing to spend time interviewing, because an interview gives them visibility into the process they’re participating in, realtime feedback about how they’re doing, and a personal sense of who they’re going to be working with.

The Elusive “Questioning Attitude”: What It Is, Why It’s Important, and How to Cultivate It

An internet search for “questioning attitude” (include the quotes if you’re trying this at home) turns up article after article about this worker characteristic. It’s often stated to be a desirable trait in disciplines like nuclear power generation, construction management, and other industries where errors can be extremely costly, even deadly. You want people in those fields who are not satisfied that “everything looks OK” just because it seems so at first glance. This trait is also crucial in the social sciences, including my occupation, UX research.

When a researcher has a questioning attitude, it doesn’t just mean they like asking lots of questions. Asking questions is fundamental to the job, but a questioning attitude is about being aware of the assumptions one makes and then testing whether those assumptions are true; if they aren’t, they get discarded. While designing those tests and interpreting their results, one still needs to be constantly on the lookout for unfounded assumptions, in a kind of recursion all the way down, so that both the experimental design and the findings it generates rest on a bedrock of what has been established as true (as best as can be ascertained given the constraints).

A questioning attitude doesn’t stop being important just because the immediate risks seem low. Any technology built without one can have dramatic negative effects: an annoying learning management system can set people back in their careers; a frustrating payment workflow can cost a company millions of dollars; a facial recognition algorithm that is relatively blind to people of certain races can cause those people to feel alienated, or worse. These problems often happen because the designers of a technology work from assumptions derived from what is familiar to them, without considering that they themselves might not be representative of their users.

If a questioning attitude means being aware of one’s own assumptions, how does one gain that awareness? Based on reading accounts of people who demonstrate a strong questioning attitude, talking to senior colleagues, and drawing from my own experience, I think this awareness can be cultivated from exposure to unusual and uncomfortable situations. Getting to closely know people who are very different from yourself, living in a far-away place that’s very unlike where you’ve spent most of your life, and learning and performing a wide variety of new skills are some of the ways to expose yourself to these situations.

Part of what this provides is the ability to switch into a “man from Mars” mentality, where you can see things afresh, without value judgment or preconceived notions (similar to what I learned from searching through a hot dumpster for a pair of lost keys). Critically, it also hones a rebellious instinct to look where nobody else around you is looking, to draw connections where nobody else is drawing them, etc. Not every place you look and not every connection you draw will be valid, but without this rebellious instinct important considerations are bound to be forgotten.

Humans naturally (by virtue of genetics and formative development) have different levels of self-awareness and rebelliousness, so to some extent the amount of questioning attitude present among UX researchers at a given company may depend on which researchers that company hires. But I believe it is still a skill that can be maximized in each individual, and should be, to produce the best design outcomes.

DACUM as user research blitz

When I conduct research with users of internal enterprise systems, a significant portion of my interviews is spent learning about users’ roles, duties, and tasks. This information is critical to understanding the context in which users interact with their technology, and what their goals are when using it.

A few months ago I learned about a systematic process dedicated to uncovering and ordering this information. The process is called DACUM, an acronym for Developing a Curriculum. It exists to support training development, since trainers need to know what duties and tasks comprise the various roles within their organizations so they can develop training content for them, and also identify training gaps. I have been working closely with a training development team, and had the privilege of sitting in on a DACUM workshop. I hope to eventually become certified to moderate them myself.

Whereas interviews can take weeks to plan, administer, and analyze, a DACUM workshop takes two days and generates a concise set of artifacts listing all the duties and tasks for a given role. I have found that observing a DACUM workshop gives me a reasonably confident understanding of the role discussed, a level of understanding I would otherwise not expect to attain without conducting and analyzing data from a dozen or more interviews.

A DACUM workshop operates somewhat like a focus group, with a panel of subject matter experts (SMEs) and a certified moderator walking them through a semi-structured discussion. The SMEs all share a particular role or job title in common but may (and ideally do) vary in years of experience, work location, and other factors. Through collaborative brainstorming and analysis between the moderator and the SMEs, the key duties of the SMEs’ role are listed and ordered, and then the same method is applied to the tasks that fall under each duty. Other items such as required tools and common acronyms are also listed. These then become the basis of a set of artifacts to which training development personnel can later refer.
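The structure of the resulting artifact is hierarchical: a role, its ordered duties, the ordered tasks under each duty, plus supporting lists. Here is a hypothetical sketch of that shape (the class and field names are my invention, not an official DACUM chart format):

```python
# Hypothetical model of a DACUM-style chart: one role, ordered duties,
# each duty's ordered tasks, plus supporting lists (tools, acronyms).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Duty:
    name: str
    tasks: List[str] = field(default_factory=list)  # ordered as the panel agreed

@dataclass
class DacumChart:
    role: str
    duties: List[Duty] = field(default_factory=list)
    tools: List[str] = field(default_factory=list)
    acronyms: Dict[str, str] = field(default_factory=dict)

# Invented example data for an imaginary "Field Technician" role.
chart = DacumChart(
    role="Field Technician",
    duties=[
        Duty("Maintain equipment", ["Inspect units", "Replace worn parts"]),
        Duty("Document work", ["Log service calls", "File incident reports"]),
    ],
    tools=["Multimeter", "Service tablet"],
    acronyms={"SME": "Subject Matter Expert"},
)
print(f"{chart.role}: {len(chart.duties)} duties, "
      f"{sum(len(d.tasks) for d in chart.duties)} tasks")
```

However the chart is recorded in practice, it is this role-to-duty-to-task hierarchy that training developers (and, I’d argue, UX researchers) refer back to.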

Observing a DACUM workshop is beneficial to me as a UX researcher because it affords – in only two days – an in-depth look at a user role, and a strong basis from which to further investigate existing needs not only in learning and training but also in technology and other systems, potentially shaving weeks off my research effort. This means I can deliver findings and recommendations on tighter deadlines, and dedicate time to other research activities.

More information on DACUM can be found at

“Pain points”

“A pain point by any other name…”

“Pain points” is a UX term of art referring to steps in a process or workflow that users typically dislike, find problematic, or even seek to avoid or work around.

Virtually all UX practitioners understand that this idiom doesn’t mean the user literally experiences pain, only that the user finds some aspect of the experience negative and, presumably, desirable to change or eliminate.

Pain points can of course be very serious, for example if an emergency worker has to spend an extra minute fiddling with a tricky latch in order to access some life-saving piece of equipment.

But due to the nature of UX work, the vast majority of pain points identified in user workflows are trivial. Sometimes they are little things that irk or inconvenience people (e.g., having to orient a key a certain way before it can be inserted into a lock); other times they are problems most people are not even aware they have until a solution exists (e.g., many people say they did not realize that being disconnected from the internet while out and about was a problem until they owned a smartphone).

Does the use of this dramatic-sounding phrase introduce or reinforce a bias on the part of the UX practitioner? Specifically, I mean a bias that inclines us to escalate the stated seriousness of problems, or to solve problems that did not need solving. I’m not sure whether this is happening: the names we give things can be important and transformative, but sometimes they aren’t. The escalation I describe could be happening for plenty of other reasons, but that doesn’t rule out bias from our language being one of them.

So, I often add scare quotes to the term “pain points” as a way to exercise caution and remind myself not to become biased.

Personae, then and now

The first personas I ever created were based on a template I inherited. I was really just filling in blanks, except that I redid the graphical portions. The original graphics used vertical sliders to show the levels of some discrete user qualities. I replaced these with horizontal sliders to downplay the relationship between those qualities: at a glance, the curve traced by the array of vertical sliders erroneously looked meaningful, and I judged this to be less of an issue with horizontal sliders.

On subsequent projects, I created new persona formats for increased scannability, graphics that were more direct and transparent, and content categories based around information I knew my team and I would want to refer back to. This turned into an ongoing internal challenge: the quest for a more useful persona, one that isn’t just a perfunctory artifact designed to be shown once on a slide in a presentation to stakeholders, but an actual tool the UX team will use throughout the development of the system.

To do this we had to consider what kinds of information about users we would likely need at a glance 1 month, 3 months, 6 months, or 2 years into a project. Some information might be useful now but not later, or later but not now. Because of the way personas tend to get used by the business, we leaned toward information that was immediately useful near the beginning of a project, but made sure to fortify it with content that would continue to be useful later on as a reminder of important high-level information.

That’s where we started to get into things like work culture and values. To be honest, how best to represent those in something like a persona is a challenge I’m still thinking through, though I have ideas. It’s something I’d like to continue working on in upcoming projects.