Bottom-up standardization

For the past year my team at Ford has been working on a project that involves us visiting the company’s various manufacturing plants across North America, where we’ve been conducting interviews and contextual inquiries with workers. Although we are trying to understand a technology problem that is common to basically all the sites we’ve visited, each one has its own needs, culture, and way of doing things, and each is affected by this technology problem in slightly different ways.

Once our research is turned into a design and that design has been piloted, my team knows there will be some pressure to standardize it across all sites. Meanwhile the various sites each have their own idea of what an ideal solution to this technology problem should be.

Many large organizations I’ve worked for have experienced some version of this same scenario, and it has me thinking about how the success of standardizing any solution depends not only on the solution itself, but on how the process of standardization is approached.

There are advantages and disadvantages to both centralized and localized solutions, but they are typically framed as being in opposition. How can companies break out of this stalemate?

I decided to reframe the situation as something other than a “problem”, in a way I hope is more illuminating, and to take a lesson from several instances where standardization has been achieved more successfully.

Understanding a situation as a problem sets up an expectation that you can “solve” it once and for all. If we take our pet solution and try to justify it by overcoming objections, natural human biases will inevitably take over. This is how we end up with the familiar story of leadership ramming an unwanted solution down employees’ throats, or employees avoiding or even sabotaging an unwanted solution rolled out to them by leadership.

It is necessary to deeply understand both sides of the situation. One of the more stimulating things about UX research is getting to hear from different sets of stakeholders, which provides a unique vantage point that I’d liken to stereoscopy. When different sets of stakeholders believe they are in conflict with each other, it can even be like seeing in an extra dimension, almost like cubism.

Calvin and Hobbes goes cubist when Calvin's dad convinces him to see both sides of an issue.
Bill Watterson masterfully illustrates what happens when Calvin sees both sides of an issue.

In my own words, standardization is when you take something that provides benefits in one context and put that same thing into lots of different contexts, with the aim of reaping not only those same benefits over and over again, but additional benefits that come just from having created uniformity.

But it doesn’t always work out this way. Sometimes the other contexts are a poor fit, and often the end users push back, sometimes rightly so. I’d bet everyone can think of some examples of times when attempts to standardize were justifiably unwelcome.

So, what are some strong arguments for and against both standardized or “centralized” solutions and customized or “local” solutions? Below are some I’ve heard over the years:

A table listing pros and cons of both centralized and localized solutions.
I tried to organize this table so pros and cons on the same line mostly corresponded with each other, but really they are in no particular order.

By the way, there is scholarly research backing up some of the items in this table. For example, Farrell and Saloner published a paper in the 1980s in the RAND Journal of Economics about how one of the risks of standardization is that it can lock you into an inferior product; their paper has since been cited thousands of times, indicating the enduring salience of that insight.

A side benefit of laying out pros and cons like this is that it prompts us to think about the practical and emotional impact of the change, so it can be communicated more effectively when the time comes. And importantly, this exercise decouples our own emotions from whatever solution we ultimately pursue, which helps make subsequent decisions less prone to biases and blind spots.

With these arguments laid out, it is clear there is no neat, easy answer that would please everyone and resolve all the misalignment. It is also clear that we are really talking about trade-offs, and whatever solution we pursue will need to manage them.

Internalizing all this turned it into a filter through which I absorbed other ideas and examples. It didn’t take long before I stumbled across two that were revelatory.

The first example was from US legal codes. Our system of federalism defines certain responsibilities as those of the Federal government, and leaves others to the states. However, states sometimes have an interest in working from the same set of uniform laws as other states, but in areas outside the proper jurisdiction of the Federal government.

When this happens, I learned, what gets created is called a Uniform Act. These are written collaboratively among different states. There are more than a hundred of them, typically concerning matters related to commerce but sometimes other things like child custody, land use, foreign judgments, controlled substances, and so on. 

The way it works is that various state governors appoint members to a body called the Uniform Law Commission, which drafts the text of whatever act the states will be asked to sign on to. The Commission has no direct legislative power; instead, the acts it drafts become state law only as each state legislature signs on to them.

This caught my eye because we are used to hearing about standardization as a top-down thing, but this is essentially a form of bottom-up standardization: using a system of representative government, the people who would be the subjects of standardization get together and decide what it is they’re standardizing to, and then agree to it before it is implemented. And they agree to it because they buy into the idea and think it will work for them. 

How could something like that work for various local sites within large companies? What if there were a technology solution that various sites could opt into? It would require some marketing, in the sense of making sure all the sites knew the system was available, what it entailed, what its benefits were, and how to onboard it, but sites that made an informed decision to opt in would presumably reap those benefits without feeling like the solution was foisted on them by ivory-tower executives at HQ.

Remember that even an okay system met with enthusiasm by its users is probably going to work better than a perfect system that users feel resentful about.   

The second example was ISO, or the International Organization for Standardization, which oversees the implementation of tens of thousands of technical standards for everything from shipping containers to the light-sensitivity of photographic film. ISO is the reason the nuts you buy at one hardware store match the bolts you might buy at another hardware store—even in another country. Name just about any manufactured product and there’s a good chance there are one or more of these standards in its specifications. ISO standards are at work behind almost every facet of ordinary life and most people don’t realize it.  

But in the last few decades there has been rising controversy over who sets these standards and what they should be. One instance of this involves the increasing share of Chinese manufacturing in the global economy, which has led to a push from China for a greater say in developing standards. In an effort to preempt this, national governments worldwide have become more eager to dictate (from the top down) their own standards to whatever manufacturing is within their jurisdiction.

Advocacy for bottom-up standardization has come from people such as Duff Johnson, Executive Director of the PDF Association, who used that exact term in an article last year in which he said “standards work best for everyone when they are developed openly”, calling for an “organic process”. He recommends that governments engage with industry at the subject matter expert level, creating a win-win in which the government gains expertise, skills, and context, and the industry members can better understand government viewpoints and interests. 

This provides an important perspective on how companies might standardize technology solutions. Executive stakeholders should engage with ground-level employees on the SME level, where each can benefit from exposure to and empathy with the other’s working paradigm. Fortunately, Duff Johnson’s suggestions reflect the way my UX research team is already going about our various projects. 

We have a responsibility to ensure corporate stakeholders understand the SME perspective, and at the same time we have to address the corporate interest in standardization, with its attendant trade-offs but also undeniable benefits. Our interviews and contextual inquiries are those engagements with subject matter experts, and the insights we collect and synthesize will be shared with corporate stakeholders. So in addition to providing guidelines for the technology designers and developers we work with, this insight about bottom-up standardization is a reminder that we are forging a bridge between the highest and lowest levels of our company.


Stepping on toes

As a practice, UX can be modeled as having two components: Research and Design. For any UX initiative to be successful, these components have to work together harmoniously. But a user’s experience can also be spoiled when one of these components steps on the toes of the other.

Research and Design are co-iterative as phases; they may overlap, but if UX is being done properly then Research is both the first and last phase — it doesn’t matter how many Design iterations fall in the middle.

Research/Design iterations in UX practice

What I mean by stepping on toes is when a phase that should come later starts too early. Design can step on the toes of Research, for example, by developing prototypes or even finished products that are not informed by Research findings, which is bad on its own but worse if users come into contact with those uninformed artifacts. (Indeed, in UX we often lament instances where this happens, and it happens a lot!) It affects those users’ expectations and perceptions of what the experience should be, and closes doors to what the experience might be.

When Research steps on the toes of Design, you get things like the incident that prompted me to write this blog entry today: I was on a website, entering information into a form. In fact I was typing into a field. In the middle of that task, I was interrupted by a popup asking me for feedback on the website, and I had to take my hands off my keyboard and use my mouse to take further action.

I was interrupted in a task by a popup that looked roughly like this

There are three actions a user might take in this situation:

  1. Give feedback according to how the experience has generally been up to that point
  2. Close the popup, usually without thinking about it
  3. Give feedback that is negatively biased by the recent interruption

I would bet that only a tiny fraction of people would respond with Action 1. Personally, I responded with Action 2, as I suspect most people would. (When briefly interrupted from a task, it’s normal to simply want to resume it.) I don’t know how many people would go with Action 3, but I would bet it is a much larger proportion than those who would choose Action 1.

The people maintaining the website should have waited until just after users completed the form to ask for feedback. There’s a good chance many people would still take Action 2 at that point, but whatever feedback was collected would not be negatively biased by a recent disruption.
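To make the alternative concrete, here is a minimal sketch in TypeScript of how a feedback prompt could be tied to form completion instead of a mid-task timer. The form selector and the showFeedbackSurvey function are hypothetical placeholders, not anything the actual site used:

```typescript
// Hypothetical sketch: ask for feedback only after the task is finished.

function showFeedbackSurvey(): void {
  // Stand-in for whatever survey widget the site actually uses.
  console.log("How was your experience filling out this form?");
}

const form = document.querySelector<HTMLFormElement>("#signup-form");

form?.addEventListener("submit", () => {
  // The user has completed the task, so the prompt no longer interrupts
  // them, and their feedback isn't colored by a fresh disruption.
  showFeedbackSurvey();
});

// The pattern to avoid: firing the survey on a timer while the user
// may still be typing into a field.
// setTimeout(showFeedbackSurvey, 30_000);
```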

In this case, Research stepped on the toes of Design by asking users for feedback before they could complete the experience as designed. Not only will those users not be able to comment on the whole experience, but when giving feedback they will be biased against whatever came before. This is why it’s important for the two sides of UX not to step on each other’s toes.

If a user interface requires explanation, is it bad?

Some coworkers shared a meme that’s apparently been circulating on LinkedIn:

A USER INTERFACE IS LIKE A JOKE. IF YOU HAVE TO EXPLAIN IT, IT’S NOT THAT GOOD.

The meme is true for some applications, but definitely not for all. For instance, Microsoft Excel features a highly complex interface, but there’s a good reason it’s one of the most utilized business and research tools in the world, and I doubt the interface could realistically be made much simpler while retaining its capabilities.

The meme definitely doesn’t work in reverse: some interfaces require no explanation but are plenty bad regardless. For example: touchscreens are usually considered very intuitive because the user directly manipulates objects on the screen, but they can be difficult for certain people (e.g. those with vision impairment) to interact with in a physical sense rather than a cognitive one. There is an abundance of accessibility cases such as this.

Or, interfaces can require no explanation but be bad because in their simplicity they encourage an unhealthy level of interaction, or any interaction with a product that is unhealthy to begin with; if not for the simple interface, users might interact less with the unhealthy product or avoid it altogether, to their benefit. Think slot machines, social media, etc.

For many of the products I’ve worked on, a balance must be struck between intuitiveness and accommodating power users — those who rapidly learn and integrate into their regular use the non-obvious shortcuts, hidden features, and advanced capabilities of the product. This balance is found through research: talking with stakeholders, listening to and observing users, and learning about the broader context into which the product fits.

It is fun to identify ways a product’s interface can require less explanation — this very activity is what first got me interested in UX, as I suspect is true for many others in my field — but sometimes a simplification that makes a product easier to use for one person makes it less powerful, or in some cases completely unusable, for another.

The Vicious Cycle of Recruiting With Unpaid Work

The pandemic has left lots of people unemployed, including many UX professionals. Some companies are taking advantage of this situation to automate and scale their recruiting. (Historically this has been called “carpetbagging”, and it hasn’t gone away.) I have seen one company even make the completion of unpaid work part of their application process: after an initial screening interview with a third-party recruiter, the applicant is sent a link to Usertesting.com, where he or she is asked to watch a video of a user interacting with the company’s software and then evaluate the session. Applicants are not compensated for this work, which the recruiter told me takes about three quarters of an hour.

Forty-five minutes of uncompensated work in exchange for a shot at a steady job might seem like an okay deal for someone just entering the field, or who is desperate for other reasons. But if ten people go through this process, the company has received 7.5 hours of free labor. If 100 people go through it, the company has received 75 hours, nearly two weeks of free full-time labor. It becomes easy to see how the incentives get misaligned.

When the prospect of being hired is drastically reduced, going through an application process like this is an unambiguously negative experience. Luckily, quality UX candidates have a passion for improving experiences. They want to work somewhere they can put this passion to use, which means a place where their recommendations will be taken seriously. A company that persists in putting people through bad experiences will ultimately fail to attract quality candidates; it’s a vicious cycle.

The unpaid work I described above was ostensibly meant to show the company how the candidate evaluates a usability session. Here are two alternatives to that kind of recruiting method:

  1. Pay applicants for the time they spend evaluating your usability sessions. That at least keeps the incentives more aligned and steers clear of unethical “carpetbagging” practices.
  2. Talk to candidates instead. Quality candidates will be willing to spend time interviewing, because an interview gives them visibility into the process they’re participating in, realtime feedback about how they’re doing, and a personal sense of who they’re going to be working with.

The elusive “questioning attitude”: What it is, Why it’s important, and How to cultivate it

An internet search for “questioning attitude” (include the quotes if you’re trying this at home) turns up article after article about this worker characteristic. It’s often stated to be a desirable trait in disciplines like nuclear power generation, construction management, and other industries where errors can be extremely costly, even deadly. You want people in those fields who are not satisfied that “everything looks OK” just because it seems so at first glance. This trait is also crucial in the social sciences, including my occupation, UX research.

When a researcher has a questioning attitude, it doesn’t just mean “likes asking lots of questions”. Asking questions is fundamental to the job, but a questioning attitude is about being aware of the assumptions one makes and then testing whether those assumptions are true. If they aren’t true, they get discarded. While designing those tests and interpreting the results, one still needs to be constantly on the lookout for unfounded assumptions, in a kind of recursive pattern all the way down, ensuring that the experimental design and the findings it generates rest on a solid bedrock of what has been established as true (as best as can be ascertained given the constraints).

A questioning attitude doesn’t stop being important just because the immediate risks seem low. Any technology built without a questioning attitude can have dramatic negative effects: an annoying Learning Management System can set people back in their careers; a frustrating payment workflow can cost a company millions of dollars; a facial recognition algorithm that’s relatively blind to people of certain races can cause those people to feel alienated or worse. These problems often happen because the designers of the technology work from assumptions derived from what is familiar to them, and don’t consider that they themselves might not be representative of their users.

If a questioning attitude means being aware of one’s own assumptions, how does one gain that awareness? Based on reading accounts of people who demonstrate a strong questioning attitude, talking to senior colleagues, and drawing from my own experience, I think this awareness can be cultivated from exposure to unusual and uncomfortable situations. Getting to closely know people who are very different from yourself, living in a far-away place that’s very unlike where you’ve spent most of your life, and learning and performing a wide variety of new skills are some of the ways to expose yourself to these situations.

Part of what this provides is the ability to switch into a “man from Mars” mentality, where you can see things afresh, without value judgment or preconceived notions (similar to what I learned from searching through a hot dumpster for a pair of lost keys). Critically, it also hones a rebellious instinct to look where nobody else around you is looking, to draw connections where nobody else is drawing them, etc. Not every place you look and not every connection you draw will be valid, but without this rebellious instinct important considerations are bound to be forgotten.

Humans naturally (by virtue of genetics and formative development) have different levels of self-awareness and rebelliousness, so to some extent the amount of “questioning attitude” present among UX researchers at a given company may come down to which researchers that company hires. But I believe it is still a skill that can be maximized for each individual, and should be, to produce the best design outcomes.

DACUM as user research blitz

When I conduct research with users of internal enterprise systems, a significant portion of my interviews is spent learning about users’ roles, duties, and tasks. This information is critical to understanding the context in which users interact with their technology, and what their goals are when using it.

A few months ago I learned about a systematic process dedicated to uncovering and ordering this information. The process is called DACUM, an acronym for Developing a Curriculum. It exists to support training development, since trainers need to know what duties and tasks comprise the various roles within their organizations so they can develop training content for them, and also identify training gaps. I have been working closely with a training development team, and had the privilege of sitting in on a DACUM workshop. I hope to eventually become certified to moderate them myself.

Whereas interviews can take weeks to plan, administer, and analyze, a DACUM workshop takes two days and generates a concise and efficient set of artifacts listing all the duties and tasks for a given role. I have found that observing a DACUM workshop instills a reasonably confident level of understanding about the role discussed. I would otherwise not expect to attain that level of understanding without conducting and analyzing data from a dozen or more interviews.

A DACUM workshop operates somewhat like a focus group, with a panel of subject matter experts (SMEs) and a certified moderator walking them through a semi-structured discussion. The SMEs all share a particular role or job title in common but may (and ideally do) vary in years of experience, work location, and other factors. Through collaborative brainstorming and analysis between the moderator and the SMEs, the key duties of the SMEs’ role are listed and ordered, and then the same method is applied to the tasks that fall under each duty. Other items such as required tools and common acronyms are also listed. These then become the basis of a set of artifacts to which training development personnel can later refer.
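As a rough illustration of the shape these artifacts take, here is a sketch in TypeScript of how a workshop’s output might be structured. The field names and example content are my own invention for illustration, not taken from any actual DACUM chart:

```typescript
// Rough sketch of the structure a DACUM workshop produces: one role,
// broken into ordered duties, each broken into ordered tasks, plus
// supporting lists such as tools and acronyms. Illustrative only.

interface Duty {
  name: string;
  tasks: string[];            // discrete, observable units of work
}

interface DacumChart {
  role: string;               // the job title the SME panel shares
  duties: Duty[];             // major areas of responsibility, in order
  tools: string[];            // equipment and software the role relies on
  acronyms: Record<string, string>; // shorthand the role uses daily
}

const exampleChart: DacumChart = {
  role: "Quality Inspector", // hypothetical role for illustration
  duties: [
    {
      name: "Perform incoming inspections",
      tasks: ["Review inspection plan", "Measure sample parts", "Log results"],
    },
    {
      name: "Report quality issues",
      tasks: ["Document the defect", "Notify the responsible team"],
    },
  ],
  tools: ["Calipers", "Quality management system"],
  acronyms: { NCR: "Nonconformance report" },
};
```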

Observing a DACUM workshop is beneficial to me as a UX researcher because it affords – in only two days – an in-depth look at a user role, and a strong basis from which to further investigate existing needs not only in learning and training but also in technology and other systems, potentially shaving weeks off my research effort. This means I can deliver findings and recommendations on tighter deadlines, and dedicate time to other research activities.

More information on DACUM can be found at http://www.dacumohiostate.com

“Pain points”

“A pain point by any other name…”

“Pain points” is a UX term of art referring to steps in a process or workflow that users typically dislike, find problematic, or even seek to avoid or work around.

Basically all UX practitioners understand that this idiom doesn’t necessarily mean the user literally experiences pain, only that the user finds some aspect of the experience to be negative and, presumably, desirable to change or eliminate.

Pain points can of course be very serious, for example if an emergency worker has to spend an extra minute fidgeting with a tricky latch in order to access some life-saving piece of equipment.

But due to the nature of UX work, the vast majority of pain points identified in user workflows are trivial: they are sometimes little things that irk or inconvenience people (e.g. having to orient a key a certain way so it can be inserted into a lock), and other times they are problems most people are not even aware they have until there is a solution (e.g. many people say they did not realize being disconnected from the internet while out and about was a problem until they owned a smartphone).

Does the use of this dramatic-sounding phrase introduce or reinforce a bias on the part of the UX practitioner? Specifically, I mean a bias in which we are inclined to escalate the stated seriousness of problems, or to solve problems that did not need solving. I’m not sure whether this is happening; the names we give things are often important and transformative, but sometimes they aren’t. The escalation I describe could be happening for plenty of other reasons, but that doesn’t rule out bias stemming from our language as one of them.

So, I often add scare quotes to the term “pain points” as a way to exercise caution and remind myself not to become biased.

Personae, then and now

The first personas I ever created were based on a template I inherited. I was really just filling in blanks, except I redid the graphical portions of it. The original graphics had vertical sliders to show levels of some discrete qualities of the users. I replaced these with horizontal sliders in order to downplay the relationship between those qualities, because at a glance it erroneously looked like the curves created by the array of sliders had meaning. I determined this was less of an issue with horizontal sliders.

On subsequent projects, I created new persona formats for increased scannability, graphics that were more direct and transparent, and content categories based around information I knew my team and I would want to refer back to. This turned into an ongoing internal challenge: the quest for a more useful persona, one that isn’t just a perfunctory artifact designed to be shown once on a slide in a presentation to stakeholders, but an actual tool the UX team will use throughout the development of the system.

To do this, we had to consider what kinds of information about users we would likely need at a glance 1 month, 3 months, 6 months, or 2 years into a project. Some information might be useful now but not later, or later but not now. Because of the way personas tend to get used by the business, we leaned toward information that was immediately useful near the beginning of a project, but made sure to fortify it with content that would continue to be useful later on as reminders of important high-level information.

That’s where we started to get into things like work culture and values. To be honest, the best way to represent those in something like a persona is a challenge I’m still thinking through, though I have some ideas. It’s something I’d like to continue working on in upcoming projects.