Technology Ethics: My Seven-Year Journey

In 2014 I attended a UX conference where the closing keynote was about how the future will have no interface. The presenter gushed about the years ahead, when transhumanist devices, the Internet of Things, and ubiquitous sensors would transform our world into one in which the interface, instead of being located on a device, is the world itself:

You won’t have a light switch on your wall; you’ll turn on the light by pointing at it in a decisive way. You won’t learn to draw; you’ll wear a wristband that jiggles your hand as you hold a pencil to paper, causing you to make marks indistinguishable from those that might have been put there by a trained artist. You won’t interact with computer hardware; virtual objects and a keyboard will be projected right onto your desk and you’ll manipulate them as if they were real.

My reaction, rather than excitement, was horror and disgust. On my two-hour drive home I wondered whether I was in the wrong industry, whether maybe I should pack up my family and start life over in some kind of ultra-primitive arrangement in the woods.

I got over the worst of it by the time I arrived home, but I was left wondering why I had reacted so strongly and what it meant. What was it about that future that felt so wrong? What could I learn from it?

Eventually I figured out that what I recoiled from was the threat to my values presented by these technologies. All the sensors and complex circuitry required to make the IoT work violated my desire for simplicity and privacy; the transhumanist devices attacked my sense of authenticity; and so on. Moreover, I feared the alienation I would feel from my children if (or when) they embraced these technologies and absorbed their attendant values.

Amish churches, I came to learn, have successfully tackled this exact problem by drafting Ordnungs (written community guidelines, more or less) that include regulations about which technologies may be owned and used as normal, or else must be modified, kept outside the home, or banned entirely. As a result the Amish appear frozen in time to most outsiders, but it is hard to deny that they also enjoy tight-knit communities and lower rates of physical and mental illness, and that they are even seeing a decades-long drop in attrition. Apparently, being able to control the direction and pace of change in one’s social values comes with a huge payoff.

Although the Amish do not explicitly map technologies to values, this was something I recognized as necessary. In 2015 I devised a design process model in which the values potentially supported or threatened by a proposed technology are evaluated deeply, so that the technology’s primary and secondary effects on users’ lives might be anticipated and negative effects avoided. I got as far as testing this process model, but the results were unclear. Later I determined that my experimental design likely did not control variables tightly enough. Further, I conjectured that the system comprising both technology and modern Western social values is probably too complex to predictably influence with a design process model.

I was deflated by this setback, but soon began to notice lots of other people had started talking about ethics in design. It sounded like many people shared my concerns about the future of technology and its impact on society. When I gave a presentation on this topic at Midwest UX in 2017, it seemed like half the other presentations shared the same theme.

(I wrote an article describing my technology ethics journey up to this point, with more detail about the process model and how I tested it, on Pomiet’s website in 2016.)

Shortly afterward I joined a fledgling discussion group, Ethical Technology, founded by someone who had impressed me with his intelligence and clear writing on the subject. Many of the things he said felt like things I’d wanted to say but hadn’t found words for.

The discussion group soon grew quite sizeable, but I found the tone of the conversation had changed. We didn’t seem to be talking about technology so much as about the people making it. It did not take long for the dialogue to devolve further, all the way into partisan politics. Sometimes technology was merely decoration, not relevant to what people were really talking about; the issues raised seemed purely matters of political ideology. Disillusioned with both its political fixation and its ideological uniformity, I left the discussion group and returned to thinking and reading on my own.

Around that time, during my “day job” I was the UX researcher on a learning management system that was to eventually be rolled out to thousands of employees at the large company where I worked. In our team meetings we frequently discussed the change management aspects of the project, and I came to see how the user experience and the change experience were intricately tied together.

I became fascinated with the field of change management. I read its foundational texts and many of its journal articles, and attended meetings of the local chapter of the ACMP. But I did all this with a critical eye: I wanted to show that those who resist technology change need to be listened to rather than persuaded. This stands as the most recent influence on my thinking about technology ethics.

The success of the Amish is ultimately attributable to the control and self-determination they are able to exercise over their technology adoption decisions. I have come to see that as the most basic truth of technology ethics. The most important aspect of a technology’s design when it comes to ethics is the degree to which the human user can control his or her relationship with that technology. This means the ethics may come from the design of the technology itself, or from the rules and customs that surround it, but will ultimately be determined by the user’s freedom to adopt or reject it.

This also means few technologies are ethically perfect. We give up some freedom of what technologies to use or avoid when we agree to work for various employers, or sometimes even just to live in certain areas. We adopt many technologies simply because they are considered normal and baseline, and we never think twice about it.

Yet awareness of this situation brings into sharper relief the opportunities to make technology more ethical. That is what I hope to do in my work these days, and going forward.

The elusive “questioning attitude”: what it is, why it’s important, and how to cultivate it

An internet search for “questioning attitude” (include the quotes if you’re trying this at home) turns up article after article about this worker characteristic. It’s often cited as a desirable trait in disciplines like nuclear power generation and construction management, and in other industries where errors can be extremely costly, even deadly. You want people in those fields who are not satisfied that “everything looks OK” just because it seems so at first glance. This trait is also crucial in the social sciences, including my occupation, UX research.

When a researcher has a questioning attitude, it doesn’t just mean they like asking lots of questions. Asking questions is fundamental to the job, but a questioning attitude is about being aware of the assumptions one makes and then testing whether those assumptions are true. If they aren’t, they get discarded. While designing those tests and interpreting the results, one still needs to be constantly on the lookout for unfounded assumptions, in a kind of recursive pattern all the way down, ensuring that the experimental design and the findings it generates rest on a solid bedrock of what has been established as true (as best as can be ascertained given the constraints).

A questioning attitude doesn’t stop being important just because the immediate risks seem low. Any technology built without a questioning attitude can have dramatic negative effects: an annoying learning management system can set people back in their careers; a frustrating payment workflow can cost a company millions of dollars; a facial recognition algorithm that’s relatively blind to people of certain races can cause those people to feel alienated or worse. These problems often happen because the designers of the technology work from assumptions derived from what is familiar to them, and don’t consider that their own experiences might not be representative of their users’.

If a questioning attitude means being aware of one’s own assumptions, how does one gain that awareness? Based on reading accounts of people who demonstrate a strong questioning attitude, talking to senior colleagues, and drawing from my own experience, I think this awareness can be cultivated from exposure to unusual and uncomfortable situations. Getting to closely know people who are very different from yourself, living in a far-away place that’s very unlike where you’ve spent most of your life, and learning and performing a wide variety of new skills are some of the ways to expose yourself to these situations.

Part of what this provides is the ability to switch into a “man from Mars” mentality, where you can see things afresh, without value judgment or preconceived notions (similar to what I learned from searching through a hot dumpster for a pair of lost keys). Critically, it also hones a rebellious instinct to look where nobody else around you is looking, to draw connections where nobody else is drawing them, etc. Not every place you look and not every connection you draw will be valid, but without this rebellious instinct important considerations are bound to be forgotten.

Humans naturally (by virtue of genetics and formative development) have different levels of self-awareness and rebelliousness, so to some extent the amount of “questioning attitude” present among UX researchers at a given company may come down to which researchers that company hires. But I believe it is still a skill that can be maximized in each individual, and should be, to produce the best design outcomes.