Bottom-up standardization

For the past year my team at Ford has been working on a project that involves us visiting the company’s various manufacturing plants across North America, where we’ve been conducting interviews and contextual inquiries with workers. Although we are trying to understand a technology problem that is common to basically all the sites we’ve visited, each one has its own needs, culture, and way of doing things, and each is affected by this technology problem in slightly different ways.

Once our research is turned into a design and that design has been piloted, my team knows there will be some pressure to standardize it across all sites. Meanwhile the various sites each have their own idea of what an ideal solution to this technology problem should be.

Many large organizations I’ve worked for have experienced some version of this same scenario, and it has me thinking about how the success of standardizing any solution depends not only on the solution itself, but on how the process of standardization is approached.

There are advantages and disadvantages to both centralized and localized solutions, but they are typically framed as being in opposition. How can companies break out of this stalemate?

I decided to reframe the situation as something other than a “problem”, in a way that will hopefully be more illuminating. And I took a lesson from several instances where standardization has been achieved with more success.

Understanding a situation as a problem sets up an expectation that you can “solve” it once and for all. If we take our pet solution and try to justify it by overcoming objections, natural human biases will inevitably take over. This is how we end up with the familiar story of leadership ramming an unwanted solution down employees’ throats, or employees avoiding or even sabotaging an unwanted solution rolled out to them by leadership.

It is necessary to deeply understand both sides of the situation. One of the more stimulating things about UX research is getting to hear from different sets of stakeholders, which provides a unique vantage point that I’d liken to stereoscopy. When different sets of stakeholders believe they are in conflict with each other, it can even be like seeing in an extra dimension, almost like cubism.

Bill Watterson masterfully illustrates what happens when Calvin’s dad convinces him to see both sides of an issue.

In my own words, standardization is when you take something that provides benefits in one context and put that same thing into lots of different contexts, with the aim of reaping not only those same benefits over and over again, but additional benefits that come just from having created uniformity.

But it doesn’t always work out this way. Sometimes the other contexts are a poor fit, and often the end users push back, sometimes rightly so. I’d bet everyone can think of some examples of times when attempts to standardize were justifiably unwelcome.

So, what are some strong arguments for and against both standardized or “centralized” solutions and customized or “local” solutions? Below are some I’ve heard over the years:

A table listing the pros and cons of both centralized and localized solutions. I tried to organize the table so that pros and cons in the same row mostly correspond with each other, but really they are in no particular order.

By the way, there is scholarly research backing up some of the items in this table. For example, Farrell et al. published a paper in the 1980s in the RAND Journal of Economics about how one of the risks of standardization is that it can lock you into an inferior product; their paper has since been cited thousands of times, indicating the enduring salience of that insight.

A side benefit of laying out pros and cons like this is that it prompts us to think about the practical and emotional impact of the change, so it can be communicated more effectively when the time comes. And importantly, this exercise decouples our own emotions from whatever solution we ultimately pursue, which makes subsequent decisions less prone to biases and blind spots.

With these arguments laid out, it is clear there is no neat easy answer that would please everyone and resolve all the misalignment. It is also clear we are really talking about trade-offs. Whatever the solution, it will need to manage these trade-offs.

Internalizing all this allowed it to become a filter as I absorbed other ideas and examples. It didn’t take long before I stumbled across two that were revelatory.

The first example was from US legal codes. Our system of federalism defines certain responsibilities as those of the Federal government, and leaves others to the states. However, states sometimes have an interest in working from the same set of uniform laws as other states, but in areas outside the proper jurisdiction of the Federal government.

When this happens, I learned, what gets created is called a Uniform Act. These are written collaboratively among different states. There are more than a hundred of them, typically concerning matters related to commerce but sometimes other things like child custody, land use, foreign judgments, controlled substances, and so on. 

The way it works is that various state governors appoint members to a body called the Uniform Law Commission, which drafts the text of whatever act the states will be asked to sign on to. The Commission has no direct legislative power; the acts it drafts only become state laws as each state legislature enacts them.

This caught my eye because we are used to hearing about standardization as a top-down thing, but this is essentially a form of bottom-up standardization: using a system of representative government, the people who would be the subjects of standardization get together and decide what it is they’re standardizing to, and then agree to it before it is implemented. And they agree to it because they buy into the idea and think it will work for them. 

How could something like that work for various local sites within large companies? What if there were a technology solution that various sites could opt into? It would require some marketing, in the sense of making sure all the sites knew the system was available, what it entailed, what its benefits were, and how to onboard it, but sites that made an informed decision to opt in would presumably reap those benefits without feeling like the solution had been foisted on them by ivory-tower executives at HQ.

Remember that even an okay system met with enthusiasm by its users is probably going to work better than a perfect system that users feel resentful about.   

The second example was ISO, the International Organization for Standardization, which develops and publishes tens of thousands of technical standards for everything from shipping containers to the light sensitivity of photographic film. ISO is the reason the nuts you buy at one hardware store match the bolts you might buy at another hardware store—even in another country. Name just about any manufactured product and there’s a good chance one or more of these standards appears in its specifications. ISO standards are at work behind almost every facet of ordinary life, and most people don’t realize it.

But in the last few decades there has been rising controversy over who sets these standards and what they should be. One instance of this involves the increasing share of Chinese manufacturing in the global economy, which has led China to push for a larger role in developing standards. In an effort to preempt this, national governments worldwide have become more eager to dictate (from the top down) their own standards to whatever manufacturing falls within their jurisdiction.

Advocacy for bottom-up standardization has come from people such as Duff Johnson, Executive Director of the PDF Association, who used that exact term in an article last year in which he said “standards work best for everyone when they are developed openly”, calling for an “organic process”. He recommends that governments engage with industry at the subject matter expert level, creating a win-win in which the government gains expertise, skills, and context, and the industry members can better understand government viewpoints and interests. 

This provides an important perspective on how companies might standardize technology solutions. Executive stakeholders should engage with ground-level employees on the SME level, where each can benefit from exposure to and empathy with the other’s working paradigm. Fortunately, Duff Johnson’s suggestions reflect the way my UX research team is already going about our various projects. 

We have a responsibility to ensure corporate stakeholders understand the SME perspective, and at the same time we have to address the corporate interest in standardization, with its attendant trade-offs but also undeniable benefits. Our interviews and contextual inquiries are those engagements with subject matter experts, and the insights we collect and synthesize will be shared with corporate stakeholders. So beyond informing the guidelines we provide to the technology designers and developers we work with, this insight about bottom-up standardization is an opportunity to recognize that we are forging a bridge between the highest and lowest levels of our company.

Stepping on toes

As a practice, UX can be modeled as having two components: Research and Design. For any UX initiative to be successful, these components have to work together harmoniously. But a user’s experience can also be spoiled when one of these components steps on the toes of the other.

Research and Design are co-iterative as phases; they may overlap, but if UX is being done properly then Research is both the first and last phase — it doesn’t matter how many Design iterations fall in the middle.

Research/Design iterations in UX practice

What I mean by stepping on toes is when a phase that should come later starts too early. Design can step on the toes of Research, for example, by developing prototypes or even finished products that are not informed by Research findings, which is bad on its own but worse if users come into contact with those uninformed artifacts. (Indeed, in UX we often lament instances where this happens, and it happens a lot!) It affects those users’ expectations and perceptions of what the experience should be, and closes doors to what the experience might be.

When Research steps on the toes of Design, you get things like the incident that prompted me to write this blog entry today: I was on a website, entering information into a form. In fact I was typing into a field. In the middle of that task, I was interrupted by a popup asking me for feedback on the website, and I had to take my hands off my keyboard and use my mouse to take further action.

I was interrupted in a task by a popup that looked roughly like this

There are three actions a user might take in this situation:

  1. Give feedback according to how the experience has generally been up to that point
  2. Close the popup, usually without thinking about it
  3. Give feedback that is negatively biased by the recent interruption

I would bet that only a tiny fraction of people respond with Action 1. Personally, I responded with Action 2, as I suspect most people would. (When briefly interrupted from a task, it’s normal to simply want to resume the task.) I don’t know how many people generally go with Action 3, but I suspect it is a much larger proportion than those who go with Action 1.

The people maintaining the website should have waited until just after users completed the form to ask for feedback (a rough sketch of that timing follows below). There’s a good chance many people would still take Action 2 at that point, but whatever feedback was collected would not be negatively biased by a recent disruption.
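As a minimal sketch of what I mean, and assuming an ordinary web form (the element ID and function name here are hypothetical, not the site’s actual code), the feedback prompt could be wired to the form’s submit event rather than to a timer:

    // Hypothetical sketch: ask for feedback only after the task is done.
    function showFeedbackPrompt(): void {
      // Render the survey popup here; left as a stub for illustration.
      console.log("How was your experience filling out this form?");
    }

    const form = document.getElementById("signup-form") as HTMLFormElement | null;

    form?.addEventListener("submit", () => {
      // The user has finished the form, so feedback now reflects the
      // whole experience rather than the interruption itself.
      showFeedbackPrompt();
    });

    // The pattern to avoid: a timed popup that fires regardless of what
    // the user is doing (e.g., mid-keystroke in a field).
    // setTimeout(showFeedbackPrompt, 30_000);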

In this case, Research stepped on the toes of Design by asking users for feedback before they could complete the experience as designed. Not only will those users be unable to comment on the whole experience, but when giving feedback they will be biased against whatever came before. This is why it’s important for the two sides of UX not to step on each other’s toes.

If a user interface requires explanation, is it bad?

Some coworkers shared a meme that’s apparently been circulating on LinkedIn:

A USER INTERFACE IS LIKE A JOKE. IF YOU HAVE TO EXPLAIN IT, IT’S NOT THAT GOOD.

The meme is true for some applications, but definitely not for all. For instance, Microsoft Excel features a highly complex interface, but there’s a good reason it’s one of the most utilized business and research tools in the world, and I doubt the interface could realistically be made much simpler while retaining its capabilities.

The meme definitely doesn’t work in reverse: some interfaces require no explanation but are plenty bad regardless. For example: touchscreens are usually considered very intuitive because the user directly manipulates objects on the screen, but they can be difficult for certain people (e.g. those with vision impairment) to interact with in a physical sense rather than a cognitive one. There is an abundance of accessibility cases such as this.

Or, interfaces can require no explanation but be bad because in their simplicity they encourage an unhealthy level of interaction, or any interaction with a product that is unhealthy to begin with; if not for the simple interface, users might interact less with the unhealthy product or avoid it altogether, to their benefit. Think slot machines, social media, etc.

For many of the products I’ve worked on, a balance must be struck between intuitiveness and accommodating power users — those who rapidly learn and integrate into their regular use the non-obvious shortcuts, hidden features, and advanced capabilities of the product. This balance is found through research: talking with stakeholders, listening to and observing users, and learning about the broader context into which the product fits.

It is fun to identify ways a product’s interface can require less explanation — this very activity is what first got me interested in UX, as I suspect is true for many others in my field — but sometimes a simplification that makes a product easier to use for one person makes it less powerful, or in some cases completely unusable, for another.

Privacy and Security in 2021

I just got a new phone, which means a good chunk of time spent slogging through its OS, opening every menu I can find, and turning off any setting that might plausibly translate into packets of information flying through the air without my explicitly sending or asking for them.

As before, my new phone is a “dumb-phone” (what else?), but even candybar and flip phones these days are equipped with all kinds of geolocation, auto-updates, background data, and other transmission capabilities, turned on by default, that could translate into data about me and my life being captured, sold, archived on some corporate server somewhere, and otherwise used to lower my quality of life.

People who see me using a flip phone tend to have the same reaction: praise (“Nice! A flip phone!”), admiration (“Good for you!”), even statements of jealousy (“I wish I could have a phone like that!”). But occasionally I encounter the opposite response: a kind of huffy sneer, as if I’m rocking the boat and holding society back by not getting with the smartphone program. Indeed, if quality of life is defined as maximizing convenience and computer-assisted abilities, I am lowering my quality of life, as well as that of anyone who might want to text me a link (rather than email it), or have me download their app, or do whatever else smartphone users do in their sleep but which I’ve kept out of my life.

But in the most basic sense that isn’t true: in my case, quality of life is improved through inconvenience. I don’t have Twitterer’s brain, I don’t interrupt conversations to look things up, and I don’t Google anything. (I use DuckDuckGo instead and highly recommend it!)

I almost wrote “I don’t Google anything, ever” but eliminated that last word because it’s not technically true: from my laptop, running a VPN, with my location data concealed as best I can, I sometimes use YouTube, Google Maps, and Google Scholar, and for some collaborative activities I go along with others’ use of Google Drive.

In each case I have taken reasonable steps to restrict access to my data, but I know it is never perfect, and I also know a motivated and skilled person can probably find a way to get it anyway. There is no perfect privacy or security, ever. In “meatspace” we live in houses with windows, after all. Parabolic and laser microphones are a thing. Camera drones are getting smaller and quieter all the time. You walk into someone’s house and you never know what devices are listening. Surveillance cameras are ubiquitous, from doorbells to stoplights. In the virtual world, where everything can be recorded, copied, and sent to a million places at once, perfect privacy and security are even less plausible.

But to me, this makes it even more important to define and defend a reasonable expectation of privacy and security, especially on my phone and laptop. I know these devices will never be perfectly private and secure. I know that I could spend years learning the most advanced cryptography skills and tighten up my privacy and security even more. I also know I could take the path of least resistance and do nothing, opting for maximum convenience instead. I choose the middle path: maximizing the privacy and security I can get right now, for a few hours’ work.

Data and tracking and updates and geolocation services are turned off. Caches are cleared. Privacy settings are thoroughly fiddled-with. My phone is as dumb as I can make it. It’s a great feeling!

Technology Ethics: My Seven-Year Journey

In 2014 I attended a UX conference at which the closing keynote was about how the future will have no interface. The presenter gushed about the years ahead, when transhumanist devices, the Internet of Things, and ubiquitous sensors would transform our world into one in which the interface, instead of being located on a device, is the world itself:

You won’t have a light switch on your wall; you’ll turn on the light by pointing at it in a decisive way. You won’t learn to draw; you’ll wear a wristband that jiggles your hand as you hold a pencil to paper, causing you to make marks indistinguishable from those that might have been put there by a trained artist. You won’t interact with computer hardware; virtual objects and a keyboard will be projected right onto your desk and you’ll manipulate them as if they were real.

Rather than getting excited about this, I reacted with horror and disgust. On my two-hour drive home I wondered if I was in the wrong industry, and whether maybe I should pack up my family and start life over in some kind of ultra-primitive arrangement in the woods.

I got over the worst of it by the time I arrived home, but I was left wondering why I had had that strong reaction and what it meant. What was it about that future that felt so wrong? What could I learn from it?

Eventually I figured out that what I recoiled from was the threat to my values presented by these technologies. All the sensors and complex circuitry required to make the IoT work violated my desire for simplicity and privacy; the transhumanist devices attacked my sense of authenticity; and so on. Moreover, I feared the alienation I would feel from my children if (or when) they embraced these technologies and absorbed their attendant values.

Amish churches, I came to learn, have successfully tackled this exact problem by drafting Ordnungs — written community guidelines, more or less — that include regulations about which technologies may be owned and used as normal, or else must be modified, kept outside the home, or banned entirely. As a result the Amish appear frozen in time to most outsiders, but it is hard to deny that they also enjoy tight-knit communities and lower rates of physical and mental illness, and that they are even seeing a decades-long drop in attrition. Apparently, being able to control the direction and pace of change in one’s social values comes with a huge payoff.

Although the Amish do not explicitly map technologies to values, this was something I recognized as necessary. In 2015 I devised a design process model in which the values potentially supported or threatened by a proposed technology are evaluated deeply so their primary and secondary effects on users’ lives might be anticipated, and negative effects avoided. I got as far as testing this process model, but the results were unclear. Later I determined my experimental design likely did not control variables tightly enough. Further, I conjectured that the system that includes both technology and modern western social values is probably too complex to predictably influence with a design process model.

I was deflated by this setback, but soon began to notice lots of other people had started talking about ethics in design. It sounded like many people shared my concerns about the future of technology and its impact on society. When I gave a presentation on this topic at Midwest UX in 2017, it seemed like half the other presentations shared the same theme.

(I wrote an article describing my technology ethics journey up to this point, with more detail about the process model and how I tested it, on Pomiet’s website in 2016.)

Shortly afterward I joined a fledgling discussion group, Ethical Technology, founded by someone whose intelligence and clear writing on the subject had struck me. Many of the things he said felt like things I’d wanted to say but hadn’t found words for.

The discussion group soon grew quite sizeable, but I found the tone of the conversation had changed. We didn’t seem to be talking about technology as much as we talked about the people making it. It did not take long for the dialogue to devolve further, all the way into partisan politics. Sometimes technology was merely decoration, not relevant to what people were really talking about; the issues raised seemed purely about political ideology. Disillusioned with both its political fixation and its ideological uniformity, I left the discussion group and returned to thinking and reading on my own.

Around that time, during my “day job” I was the UX researcher on a learning management system that was to eventually be rolled out to thousands of employees at the large company where I worked. In our team meetings we frequently discussed the change management aspects of the project, and I came to see how the user experience and the change experience were intricately tied together.

I became fascinated with the field of change management. I read its foundational texts and many of its journal articles, and attended meetings of the local chapter of the ACMP. But I did all this with a critical eye: I wanted to show that those who resist technology change need to be listened to rather than persuaded. This stands as the most recent influence on my thinking about technology ethics.

The success of the Amish is ultimately attributable to the control and self-determination they are able to exercise over their technology adoption decisions. I have come to see that as the most basic truth of technology ethics. The most important aspect of a technology’s design when it comes to ethics is the degree to which the human user can control his or her relationship with that technology. This means the ethics may come from the design of the technology itself, or from the rules and customs that surround it, but will ultimately be determined by the user’s freedom to adopt or reject it.

This also means few technologies are ethically perfect. We give up some freedom of what technologies to use or avoid when we agree to work for various employers, or sometimes even just to live in certain areas. We adopt many technologies simply because they are considered normal and baseline, and we never think twice about it.

Yet awareness of this situation brings into sharper relief the opportunities to make technology more ethical. That is what I hope to do in my work these days, and going forward.

Practice Makes Better: A silver lining to the quarantine

First among the Nielsen Norman Group’s ten usability heuristics is “visibility of system status”, which is a fancy way of saying “feedback”: how clearly, how quickly, and how often is the system showing you its responses to your inputs? Dr. Rob Keefer includes an analogous principle — “Always know how things are going” — in his seven-part Harmonics Way philosophy.

There are a lot of unknowns floating around these days. Will we ever return to normal work and school lives? How much longer will we have to wear these uncomfortable masks whenever we go into a store? Where will the chain reaction set off by the coronavirus eventually take us? It is a time of uncertainty, and if we consider existence as one big system, the virus and its impact are certainly not helping the “visibility of system status”.

Simultaneously, the full days many of us now spend quarantined at home with our families afford us much more feedback about how we’re doing as property owners, as spouses, and as parents. This feedback is linked to the opportunity to practice and improve in these roles.

Musicians who shred for 30 or more hours a week not only get a more intimate feel for their instruments and the music they’re playing; they also get better faster than musicians who only put in an hour or two over that same timespan. In this same way, increased exposure to the “systems” of home life is allowing many of us to find out what we’re really made of when it comes to those areas, and then hopefully to improve.

This can be an intimidating and overwhelming process. Users aren’t (and shouldn’t be) judged by how well they interact with technical systems — for example, how easily they are able to use the checkout process on Amazon.com — but we are judged by how well we interact with our families and our jobs, and by how well we maintain our property. This creates added stress, especially when there are conflicts or setbacks, but (like most stress) it may ultimately represent a chance for huge masses of people to become better homeowners, spouses, and parents.

The Vicious Cycle of Recruiting With Unpaid Work

The pandemic has left lots of people unemployed, including many UX professionals. Some companies are taking advantage of this situation to automate and scale their recruiting. (Historically this kind of opportunism has been called “carpetbagging”, and it hasn’t gone away.) I have seen one company even make the completion of unpaid work part of its application process: after an initial screening interview with a third-party recruiter, the applicant is sent a link to Usertesting.com, where he or she is asked to watch a video of a user interacting with the company’s software and then evaluate the session. Applicants are not compensated for this work, which the recruiter told me takes about three quarters of an hour.

Forty-five minutes of uncompensated work in exchange for a shot at a steady job might seem like an okay deal for someone just entering the field, or for someone who is otherwise desperate. But if ten people go through this process, the company has received 7.5 hours of free labor. If 100 people go through it, the company has received nearly two weeks of free labor. It becomes easy to see how the incentives get misaligned.
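To make the scaling concrete, here is a minimal back-of-the-envelope sketch using the numbers above (45 minutes per applicant, 40-hour work-weeks); the function name is just for illustration:

    // Rough arithmetic for how unpaid screening work accumulates.
    const minutesPerApplicant = 45;
    const hoursPerWorkWeek = 40;

    function freeLaborHours(applicants: number): number {
      return (applicants * minutesPerApplicant) / 60;
    }

    console.log(freeLaborHours(10));                     // 7.5 hours
    console.log(freeLaborHours(100));                    // 75 hours
    console.log(freeLaborHours(100) / hoursPerWorkWeek); // ~1.9 work-weeks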

When the prospect of being hired is drastically reduced, going through an application process like this is an unambiguously negative experience. Luckily, quality UX candidates have a passion for improving experiences. They want to work somewhere they can put this passion to use, which means a place where their recommendations will be taken seriously. A company that persists in putting people through bad experiences will ultimately fail to attract quality candidates; it’s a vicious cycle.

The unpaid work I described above was ostensibly meant to show the company how the candidate evaluates a usability session. Here are two alternatives to that kind of recruiting method:

  1. Pay applicants for the time they spend evaluating your usability sessions. That at least keeps the incentives more aligned and steers clear of unethical “carpetbagging” practices.
  2. Talk to candidates instead. Quality candidates will be willing to spend time interviewing, because an interview gives them visibility into the process they’re participating in, real-time feedback about how they’re doing, and a personal sense of who they’ll be working with.

Social interaction and technological mediation under Lockdown

The shelter-in-place order in response to Covid-19 has demonstrated both the necessity and insufficiency of virtual socialization. Technologies like Zoom and Skype (not to mention email) have allowed countless people to remain employed, to check in with each other, and to enjoy some diversion from the tedium of confinement.

But although most states are only in week three or four of “lockdown” I am already seeing accounts of people, particularly those who live alone, suffering emotional breakdowns as a result of their physical isolation. Seeing and hearing each other in real-time is a marvelous thing, but we apparently have a deeper need to be with other people.

Videoconferencing in 3D with goggles and headphones (i.e., VR) might one day become a viable way to fulfill that need. But it presents an additional hurdle: participants will have most of their faces covered by cumbersome equipment. One solution is for participants to represent themselves with 3D avatars. But this detracts significantly from the verisimilitude of the experience, which was supposed to be the whole point.

What is less clear is whether people will care. Maybe feeling as if you’re around someone else, even if that someone looks like an obviously-fake 3D avatar, is still psychologically preferable to interacting with a video representation of a person, even a realistic one, if that representation must be mediated by a screen held at arm’s length.

“A System to Change the Culture”

The title of this blog post is a quote from Michael Corboy, the assistant commissioner of police in New South Wales, Australia. He used that phrase to describe the introduction of traffic cameras that use an AI to detect when drivers are on the phone.

I think it’s a profound phrase. In one sense, it’s backward from how we normally like to think about the relationship between technology systems and culture: we want our culture to grow organically, and our technology systems to be designed around it, in a humane way that preserves and supports our values. Intuitively, the culture should affect the system rather than the other way around.

But in another sense, the phrase acknowledges a very real and basic phenomenon that happens any time a system is introduced into a culture: the culture changes. The intended changes rarely play out exactly the way they’re meant to, at least not without unintended side effects, but the relationship between human culture and man-made systems is definitely a two-way street.

These traffic cameras will have some impact on traffic safety in NSW. And, they will incite some amount of backlash from people who feel intruded upon by Big Brother. But a lot of people will respond with indifference, and these cameras might even further normalize and legitimize the idea of high-tech government surveillance.

From the government’s standpoint, it will be nearly impossible to go back to a lower-tech alternative if this initiative does not succeed, so these cameras also mean a redefining of what it means for law enforcement to do their job. They signal an increasing dependence on computers and automation to replace human labor and judgment. And will the cameras actually change Australians’ culture around traffic safety? If so, how?

Time will tell whether the introduction of these cameras is a good thing in the end, but as always it is much bigger than just the adoption of one system.

Another writer exposes the terrible downsides of a new technology — but keeps using it!

Not only did Allie Conti get scammed on Airbnb; she uncovered a big ring of scams that exploit baked-in security weaknesses of the site, its rules, and the expectations of its users. But…

Even after a month of digging through public records, scouring the internet for clues, repeatedly calling Airbnb and confronting the [scammer] who called himself Patrick, I can’t say I’ll be leaving the platform, either. Dealing with Airbnb’s easily exploitable and occasionally crazy-making system is still just a bit cheaper than renting a hotel.

Conti’s message to Airbnb is effectively “Don’t worry about fixing these problems, I’ll keep using your site anyway so long as an Airbnb is marginally cheaper than alternatives.” Not counting all the indirect costs, of course.

I suspect there’s something else going on under the surface: a reluctance to go back to reserving rooms in hotels, simply because it is the “old way”. It doesn’t feel as hip or fresh or exciting — or dare I say fashionable? — to book a room in a hotel, and it doesn’t fit the narrative people have told themselves about what travel is supposed to look like in 2019. But that is just a narrative, and not only is it an arbitrary one, it becomes harmful when it leaves people unwilling to change their consumer behavior in response to serious problems.