Bottom-up standardization

For the past year my team at Ford has been working on a project that involves us visiting the company’s various manufacturing plants across North America, where we’ve been conducting interviews and contextual inquiries with workers. Although we are trying to understand a technology problem that is common to basically all the sites we’ve visited, each one has its own needs, culture, and way of doing things, and each is affected by this technology problem in slightly different ways.

Once our research is turned into a design and that design has been piloted, my team knows there will be some pressure to standardize it across all sites. Meanwhile the various sites each have their own idea of what an ideal solution to this technology problem should be.

Many large organizations I’ve worked for have experienced some version of this same scenario, and it has me thinking about how the success of standardizing any solution depends not only on the solution itself, but on how the process of standardization is approached.

There are advantages and disadvantages to both centralized and localized solutions, but they are typically framed as being in opposition. How can companies break out of this stalemate?

I decided to reframe the situation as something other than a “problem”, in a way that I hoped would be more illuminating. I also took a lesson from several instances where standardization has been achieved more successfully.

Understanding a situation as a problem sets up an expectation that you can “solve” it once and for all. If we take our pet solution and try to justify it by overcoming objections, natural human biases will inevitably take over. This is how we end up with the familiar story of leadership ramming an unwanted solution down employees’ throats, or employees avoiding or even sabotaging an unwanted solution rolled out to them by leadership.

It is necessary to deeply understand both sides of the situation. One of the more stimulating things about UX research is getting to hear from different sets of stakeholders, which provides a unique vantage point that I’d liken to stereoscopy. When different sets of stakeholders believe they are in conflict with each other, it can even be like seeing in an extra dimension, almost like cubism.

[Comic: Calvin and Hobbes goes cubist when Calvin’s dad convinces him to see both sides of an issue. Bill Watterson masterfully illustrates the result.]

In my own words, standardization is when you take something that provides benefits in one context and put that same thing into lots of different contexts, with the aim of reaping not only those same benefits over and over again, but additional benefits that come just from having created uniformity.

But it doesn’t always work out this way. Sometimes the other contexts are a poor fit, and often the end users push back, sometimes rightly so. I’d bet everyone can think of some examples of times when attempts to standardize were justifiably unwelcome.

So, what are some strong arguments for and against both standardized or “centralized” solutions and customized or “local” solutions? Below are some I’ve heard over the years:

[Table: pros and cons of both centralized and localized solutions. I tried to organize the table so that pros and cons on the same line mostly corresponded with each other, but really they are in no particular order.]

By the way, there is scholarly research backing up some of the items in this table. For example, Farrell and Saloner published a paper in the 1980s in the RAND Journal of Economics about how one of the risks of standardization is that it can lock you into an inferior product; their paper has since been cited thousands of times, indicating the enduring salience of that insight.

A side benefit of laying out pros and cons like this is that it prompts us to think about the practical and emotional impact of the change, so it can be communicated more effectively when the time comes. And importantly, this exercise decouples our own emotions from whatever solution we ultimately pursue, which helps make subsequent decisions less prone to biases and blind spots.

With these arguments laid out, it is clear there is no neat easy answer that would please everyone and resolve all the misalignment. It is also clear we are really talking about trade-offs. Whatever the solution, it will need to manage these trade-offs.

Internalizing all this gave me a filter to apply as I absorbed other ideas and examples. It didn’t take long before I stumbled across two that were revelatory.

The first example was from US legal codes. Our system of federalism defines certain responsibilities as those of the Federal government, and leaves others to the states. However, states sometimes have an interest in working from the same set of uniform laws as other states, but in areas outside the proper jurisdiction of the Federal government.

When this happens, I learned, what gets created is called a Uniform Act. These are written collaboratively among different states. There are more than a hundred of them, typically concerning matters related to commerce but sometimes other things like child custody, land use, foreign judgments, controlled substances, and so on. 

The way it works is that the states appoint members to a body called the Uniform Law Commission, which drafts the text of whatever act the states will be asked to sign on to. The Commission has no direct legislative power; instead, the acts it drafts only become law in a given state when that state’s legislature enacts them.

This caught my eye because we are used to hearing about standardization as a top-down thing, but this is essentially a form of bottom-up standardization: using a system of representative government, the people who would be the subjects of standardization get together and decide what it is they’re standardizing to, and then agree to it before it is implemented. And they agree to it because they buy into the idea and think it will work for them. 

How could something like that work for local sites within large companies? What if there were a technology solution that individual sites could opt into? It would require some marketing, in the sense of making sure all the sites knew the system was available, what it entailed, what its benefits were, and how to onboard it, but sites that made an informed decision to opt in would presumably reap those benefits without feeling like the solution had been foisted on them by ivory-tower executives in HQ.

Remember that even an okay system met with enthusiasm by its users is probably going to work better than a perfect system that users feel resentful about.   

The second example was ISO, or the International Organization for Standardization, which develops and publishes tens of thousands of technical standards for everything from shipping containers to the light sensitivity of photographic film. ISO is the reason the nuts you buy at one hardware store match the bolts you might buy at another hardware store—even in another country. Name just about any manufactured product and there is a good chance one or more of these standards appear in its specifications. ISO standards are at work behind almost every facet of ordinary life, and most people don’t realize it.

But in the last few decades there has been rising controversy over who sets these standards and what they should be. One instance of this involves the growing share of Chinese manufacturing in the global economy, which has led to a push from China for a larger seat at the table in developing standards. In an effort to preempt this, national governments worldwide have become more eager to dictate (from the top down) their own standards to whatever manufacturing is within their jurisdiction.

Advocacy for bottom-up standardization has come from people such as Duff Johnson, Executive Director of the PDF Association, who used that exact term in an article last year in which he said “standards work best for everyone when they are developed openly”, calling for an “organic process”. He recommends that governments engage with industry at the subject matter expert (SME) level, creating a win-win in which the government gains expertise, skills, and context, and industry members come to better understand government viewpoints and interests.

This provides an important perspective on how companies might standardize technology solutions. Executive stakeholders should engage with ground-level employees on the SME level, where each can benefit from exposure to and empathy with the other’s working paradigm. Fortunately, Duff Johnson’s suggestions reflect the way my UX research team is already going about our various projects. 

We have a responsibility to ensure corporate stakeholders understand the SME perspective, and at the same time we have to address the corporate interest in standardization, with its attendant trade-offs but also undeniable benefits. Our interviews and contextual inquiries are those engagements with subject matter experts, and the insights we collect and synthesize will be shared with corporate stakeholders. So in addition to providing guidelines for the technology designers and developers we work with, this insight about bottom-up standardization reminds us that we are forging a bridge between the highest and lowest levels of our company.

Technology Ethics: My Seven Year Journey

In 2014 I attended a UX conference where the closing keynote was about how the future would have no interface. The presenter gushed about the years ahead, when transhumanist devices, the Internet of Things, and ubiquitous sensors would transform our world into one in which the interface, instead of being located on a device, would be the world itself:

You won’t have a light switch on your wall; you’ll turn on the light by pointing at it in a decisive way. You won’t learn to draw; you’ll wear a wristband that jiggles your hand as you hold a pencil to paper, causing you to make marks indistinguishable from those that might have been put there by a trained artist. You won’t interact with computer hardware; virtual objects and a keyboard will be projected right onto your desk and you’ll manipulate them as if they were real.

Rather than getting excited about this, I reacted with horror and disgust. On my two-hour drive home I wondered if I was in the wrong industry, if maybe I should pack up my family and start life over in some kind of ultra-primitive arrangement in the woods.

I got over the worst of it by the time I arrived home, but I was left wondering why I had had that strong reaction and what it meant. What was it about that future that felt so wrong? What could I learn from it?

Eventually I figured out that what I recoiled from was the threat to my values presented by these technologies. All the sensors and complex circuitry required to make the IoT work violated my desire for simplicity and privacy; the transhumanist devices attacked my sense of authenticity; and so on. Moreover, I feared the alienation I would feel from my children if (or when) they embraced these technologies and absorbed their attendant values.

Amish churches, I came to learn, have successfully tackled this exact problem by drafting ordnungs — written community guidelines, more or less — that include regulations about which technologies may be owned and used as normal, or else must be modified, kept outside the home, or banned entirely. As a result the Amish appear frozen in time to most outsiders, but it is hard to deny that they enjoy the benefits of tight-knit communities and lower rates of physical and mental illness, and that they are even seeing a decades-long drop in attrition. Apparently, being able to control the direction and pace of change in one’s social values comes with a huge payoff.

Although the Amish do not explicitly map technologies to values, this was something I recognized as necessary. In 2015 I devised a design process model in which the values potentially supported or threatened by a proposed technology are evaluated deeply so their primary and secondary effects on users’ lives might be anticipated, and negative effects avoided. I got as far as testing this process model, but the results were unclear. Later I determined my experimental design likely did not control variables tightly enough. Further, I conjectured that the system that includes both technology and modern western social values is probably too complex to predictably influence with a design process model.

I was deflated by this setback, but soon began to notice lots of other people had started talking about ethics in design. It sounded like many people shared my concerns about the future of technology and its impact on society. When I gave a presentation on this topic at Midwest UX in 2017, it seemed like half the other presentations shared the same theme.

(I wrote an article describing my technology ethics journey up to this point, with more detail about the process model and how I tested it, on Pomiet’s website in 2016.)

Shortly afterward I joined a fledgling discussion group, Ethical Technology, founded by someone who had impressed me with his intelligence and clear writing on the subject. Many of the things he said felt like things I’d wanted to say but hadn’t found words for.

The discussion group soon grew quite sizeable, but I found the tone of the conversation had changed. We didn’t seem to be talking about technology as much as about the people making it. It did not take long for the dialogue to devolve further, all the way into partisan politics. Sometimes technology was merely decoration, not relevant to what people were really talking about; the issues raised seemed purely about political ideology. Disillusioned with both its political fixation and its ideological uniformity, I left the discussion group and returned to thinking and reading on my own.

Around that time, in my “day job”, I was the UX researcher on a learning management system that would eventually be rolled out to thousands of employees at the large company where I worked. In our team meetings we frequently discussed the change management aspects of the project, and I came to see how the user experience and the change experience were intricately tied together.

I became fascinated with the field of change management. I read its foundational texts and many of its journal articles, and attended meetings of the local chapter of the ACMP (Association of Change Management Professionals). But I did all this with a critical eye: I wanted to show that those who resist technology change need to be listened to rather than persuaded. This stands as the most recent influence on my thinking about technology ethics.

The success of the Amish is ultimately attributable to the control and self-determination they are able to exercise over their technology adoption decisions. I have come to see that as the most basic truth of technology ethics. The most important aspect of a technology’s design when it comes to ethics is the degree to which the human user can control his or her relationship with that technology. This means the ethics may come from the design of the technology itself, or from the rules and customs that surround it, but will ultimately be determined by the user’s freedom to adopt or reject it.

This also means few technologies are ethically perfect. We give up some freedom over which technologies to use or avoid when we agree to work for various employers, or sometimes even just to live in certain areas. We adopt many technologies simply because they are considered normal and baseline, and we never think twice about it.

Yet awareness of this situation brings into sharper relief the opportunities to make technology more ethical. That is what I hope to do in my work these days, and going forward.

Stop pathologizing change resistance!

Change management professionals are fond of pointing out humans’ many cognitive biases, which contribute to people’s supposed resistance to various kinds of change. They also often note that most categories of human emotion are negative, and that change is threatening to people for a long list of emotional reasons related to things like status, or the feeling of insecurity that comes with having to learn to perform tasks in a new way.

It’s easy to come away from these messages with a picture of change resisters as damaged, fragile victims, who respond to change only with irrational defensive emotions, and who need to be “managed”, “dealt with”, “addressed” (and compared to toddlers!), and “overcome”.

In my career I have listened to countless people within various organizations tell me about workplace changes they resisted. In every single case these accounts centered on specific, often tangible negative impacts and interactions the changes were causing: doctors were forced by a new electronic records system to interact primarily with screens instead of patients; accountants had to do double entry in a new piece of software that was confusing and error-prone; engineers found their new ordering tool required them to enter extra, redundant search information while producing results that were unhelpful and irrelevant.

Without talking to people like this and hearing their stories, one could get the impression they were just being pulled along by their familiarity bias, or that they were simply fearful of the loss of status that the newly implemented systems represented. Their condition, one might think, is unfortunate, but ultimately they need to (in the words of one change leader I overheard) “get over it.”

In reality, people seem to usually resist change for good reasons: the new thing is flawed; the new thing is incomplete; the new thing is not communicated about effectively or truthfully; the new thing is not needed; the new thing is not the right solution; the new thing provides a worse interaction experience than the old thing; no training on the new thing was provided, or it was provided at the wrong time, or the training was of low quality; no support for the new thing was offered; etc.

Furthermore, over my years of interviewing people, everyone I’ve asked about workplace change has expressed some variant of this realistic and positive attitude: “Change is inevitable, and I do my best to adapt to it even if I don’t always like it.” Most people I’ve talked to could name both positive and negative workplace technology changes they’d experienced, as well as both technology changes that were forced on them and ones they undertook of their own accord.

Pathologizing change resistance is especially damaging because it gives managers and executives the idea that they ought not to question or challenge the latest trends, lest they be found to be suffering the same pathologies as their Luddite employees. This contributes to a kind of Emperor’s New Clothes problem. In the end it’s everyone — not just the “emperor” — who bears the brunt of the bad decision to adopt the change.

The way to avoid this problem is to stop treating change resisters as obstacles and instead use them as a front-line resource. Some texts give only the merest lip service to seriously engaging change resisters (for example, the 100-page book “The Eight Constants of Change” devotes exactly one paragraph to it), and even then it is typically treated as an afterthought. That is a backward approach.

The people identified as change resisters are really the ones who have the answers to questions like:

  • “What change does our organization actually need to make?”
  • “What are we doing well and should keep doing?”
  • “What makes this organization a place where people want to work?”
  • “What factors go into a successful change?”

These are the kinds of questions that need to be answered before any significant workplace change is considered, which means the so-called change resisters should be engaged right at the beginning, and their concerns taken seriously.

If nothing else, giving employees the impression they are not heard is a way to ensure that a workplace change will fail.

Data analytics, change, and ethics

Much ado is made about data-driven decision-making. Why do things the old-fashioned way, with reports written by slow humans, when you can make decisions based on vast quantities of real-time data compiled by automated systems and displayed in the most (ostensibly) helpful ways?

The firehose of data we are encouraged to drink from, which our own activity feeds and from which others then drink and act, has a mixed reputation. Nobody would argue that informed decision-making is worse than flying blind, and in certain cases the “more data = better” curve really is a linear diagonal up and to the right. But at the same time, most people instinctively recoil from the collection and use of data in a growing set of instances where it feels invasive, unnecessary, and even “creepy.”

Take the well-known case (perhaps somewhat mythologized at this point) of the dad who found out his teenage daughter was pregnant because the big-box retailer Target tracked the daughter’s shopping habits and, identifying her as pregnant, proactively sent baby formula coupons to the household. It may be true that the dad would have had other more direct opportunities to find out about his daughter’s pregnancy eventually, but most people still see what happened as a violation of some kind.

Target was taking advantage of all the data available to it in order to maximize revenue, just as all businesses are coached to do, with the result that it intruded upon a delicate family situation and maybe even crossed a line with respect to privacy and ethics. To what extent are other companies taking notice of this and learning lessons from it?

The language of change management is often fatalistic: “This is what the future is going to look like, this is where your industry is headed, so you’d better do X or else get left behind.” This creates an environment where it’s easy to forget that even the biggest overarching changes are built from decisions made at the most granular levels, and that we actually have control over our technology choices. “No thanks” is always on the table even if we aren’t thinking about it.

The urgency with which companies are coached to adopt the latest technologies is not necessarily valid. Sometimes it’s better to hang back and wait, or at least to implement a change gradually and cautiously, so that the ethical boundaries of the new technology can be figured out and adhered to. It might be better for the bottom line to ask forgiveness rather than permission, but it isn’t always the right thing to do, and it can get you into trouble later on.